AI Deployer Obligations Under US Law
Deployers operate AI systems that make, or are a substantial factor in making, decisions affecting consumers, employees, or other regulated parties. Most US state AI laws place the heaviest compliance burdens on deployers, including impact assessments, disclosures, and post-deployment monitoring.
Obligations under US laws
Provide consumers with a right to correct inaccurate personal data and, where technically feasible, a right to appeal adverse consequential decisions to a human reviewer.
Deadline: ongoing
Disclose to consumers when a high-risk AI system is being used to make a consequential decision affecting them, including the system's purpose, the nature of the consequential decision, contact information, and the right to opt out where required.
Deadline: before the decision
Complete an annual impact assessment of each high-risk AI system, addressing purpose, intended outputs, performance metrics, transparency measures, post-deployment monitoring, and risks of algorithmic discrimination.
Deadline: annually
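Teams often track these assessments as structured records so the annual cadence can be checked programmatically. A minimal Python sketch; the field names mirror the elements listed above but are illustrative, not statutory text:

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class ImpactAssessment:
    """One impact assessment for a high-risk AI system.

    Field names are illustrative, not statutory language.
    """
    system_name: str
    completed_on: date
    purpose: str
    intended_outputs: str
    performance_metrics: dict[str, float] = field(default_factory=dict)
    transparency_measures: list[str] = field(default_factory=list)
    post_deployment_monitoring: str = ""
    discrimination_risks: list[str] = field(default_factory=list)

    def is_due(self, today: date) -> bool:
        # Due again once a full year has passed since completion.
        return today - self.completed_on >= timedelta(days=365)
```

A record completed on 2024-01-15 would report `is_due(...)` as `True` by mid-2025 and `False` a few months after completion.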
Provide notice to employees and applicants when AI is used to make employment-related decisions covered by the amended Illinois Human Rights Act (IHRA).
Deadline: at time of use
Refrain from using AI that has the effect of subjecting employees or applicants to discrimination on the basis of protected classes under the Illinois Human Rights Act in employment decisions.
Deadline: ongoing
Provide candidates and employees who reside in NYC with at least 10 business days' advance notice of automated employment decision tool (AEDT) use, including the job qualifications and characteristics assessed and instructions for requesting an alternative selection process or a reasonable accommodation.
Deadline: 10 business days before use
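The 10-business-day notice window can be computed mechanically. A minimal sketch, assuming weekends are the only non-business days; a production calendar would also need to exclude public holidays:

```python
from datetime import date, timedelta

def latest_notice_date(first_use: date, business_days: int = 10) -> date:
    """Walk back `business_days` weekdays from the AEDT first-use date.

    Simplified sketch: counts Monday-Friday only, ignores holidays.
    """
    d = first_use
    remaining = business_days
    while remaining > 0:
        d -= timedelta(days=1)
        if d.weekday() < 5:  # Monday=0 ... Friday=4
            remaining -= 1
    return d
```

For a first use on Friday 2025-03-14, the latest compliant notice date under this weekday-only count is Friday 2025-02-28.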
Publicly post a summary of the most recent bias audit results on the employer's website, including the date the AEDT was first used and the source of the data.
Deadline: ongoing
Subject the automated employment decision tool (AEDT) to an independent bias audit calculating selection rates and impact ratios across race/ethnicity and sex categories, conducted before first use and annually thereafter.
Deadline: annually
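The audit's core arithmetic is straightforward: a selection rate per category, and each rate divided by the highest category's rate. A sketch with made-up category labels and counts:

```python
def impact_ratios(selected: dict[str, int], applicants: dict[str, int]) -> dict[str, float]:
    """Selection rate per category, scaled by the highest-rate category.

    Impact ratio = category selection rate / selection rate of the most
    selected category. Category labels and counts here are illustrative.
    """
    # Selection rate: share of applicants in each category who were selected.
    rates = {g: selected[g] / applicants[g] for g in applicants}
    top = max(rates.values())
    return {g: rates[g] / top for g in rates}
```

With 50 of 100 selected in category A and 30 of 100 in category B, A's impact ratio is 1.0 and B's is 0.6.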
Disclosure obligations under the Texas Responsible Artificial Intelligence Governance Act (TRAIGA), Tex. Bus. & Com. Code § 552.x
Provide clear and conspicuous disclosure to consumers when they are interacting with an AI system in a manner where a reasonable consumer might believe they are interacting with a human.
Deadline: at the time of the interaction
Persons in regulated occupations using generative AI must proactively disclose at the start of an interaction that the consumer is interacting with generative AI; other consumer-facing entities must disclose upon consumer inquiry.
Deadline: at the time of the interaction
Framework controls
Establish, implement, maintain, and continually improve an AI management system (AIMS) covering policies, leadership commitment, roles, and integration with other management systems.
Conduct AI system impact assessments and risk assessments addressing intended uses, deployment context, affected stakeholders, and mitigation of identified risks per Annex A.5 controls.
Maintain documentation throughout the AI system lifecycle including data management, system development, verification and validation, and deployment per Annex A.6 controls.
Provide information to users and affected stakeholders about the AI system's intended use, capabilities, limitations, and how to interpret outputs per Annex A.8 controls.
GOVERN function: establish policies, processes, structures, and accountability for AI risk management across the organization, including senior leadership oversight and a risk-based culture.
MAP function: identify the context, intended uses, stakeholders, and risks of each AI system, including categorization of impacts on individuals, communities, and the organization.
MEASURE function: assess, analyze, and monitor AI risks using both quantitative and qualitative methods, including bias evaluation, robustness testing, and explainability assessments.
MANAGE function: prioritize and treat identified risks, allocate resources, and implement risk response strategies including mitigation, transfer, acceptance, or avoidance.
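One lightweight way to track coverage of the four functions is a checklist keyed by function name. The activity lists below paraphrase the descriptions above and are illustrative, not an official taxonomy:

```python
# Core NIST AI RMF functions with example activities (illustrative labels).
AI_RMF = {
    "GOVERN": ["policies", "accountability", "leadership oversight"],
    "MAP": ["context", "intended uses", "stakeholders", "impact categorization"],
    "MEASURE": ["bias evaluation", "robustness testing", "explainability"],
    "MANAGE": ["risk prioritization", "mitigation", "transfer", "acceptance", "avoidance"],
}

def uncovered(done: dict[str, list[str]]) -> dict[str, list[str]]:
    """List activities in each function not yet evidenced in `done`."""
    return {
        fn: [a for a in acts if a not in done.get(fn, [])]
        for fn, acts in AI_RMF.items()
    }
```

A program that has only run bias evaluations would see the remaining MEASURE activities, and all GOVERN, MAP, and MANAGE activities, flagged as gaps.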