AI Developer Obligations Under US Law
Developers create, train, or substantially modify AI systems before they reach deployers and end users. Under most US AI laws, developers carry distinct obligations around documentation, transparency, and bias mitigation that are independent of any deployer obligations.
Obligations under US laws
Publicly post on the developer's website a high-level summary of training datasets used for any generative AI system or service made available to Californians on or after January 1, 2022.
Deadline: by January 1, 2026
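The posting obligation above lends itself to a machine-readable artifact. The sketch below shows one hypothetical shape for such a summary; the statute describes the required content of the posting, not a schema, so every field name here is an assumption.

```python
import json

# Minimal sketch of a machine-readable training-data summary that could back
# the public posting described above. All field names are hypothetical.
def training_data_summary(datasets):
    """Render a high-level summary of training datasets as JSON."""
    return json.dumps(
        {
            "summary_version": "1.0",
            "datasets": [
                {
                    "name": d["name"],
                    "source": d["source"],  # where the data came from
                    "contains_personal_info": d["contains_personal_info"],
                    "collection_period": d["collection_period"],
                }
                for d in datasets
            ],
        },
        indent=2,
    )

print(training_data_summary([
    {
        "name": "example-web-corpus",
        "source": "publicly available web pages",
        "contains_personal_info": False,
        "collection_period": "2020-2023",
    },
]))
```

Publishing the summary as structured data rather than free prose makes it easier to keep the posting current as training sets change.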
- documentation: Transparency in Frontier Artificial Intelligence Act (TFAIA), Cal. Health & Safety Code § 22757.10
Report critical safety incidents to the California Office of Emergency Services within statutory timeframes.
Deadline: within the statutory timeframe
- transparency: Transparency in Frontier Artificial Intelligence Act (TFAIA), Cal. Health & Safety Code § 22757.10
Publish a transparency report prior to deploying a new frontier AI model, summarizing pre-deployment assessments and mitigations.
Deadline: before deployment
- governance: Transparency in Frontier Artificial Intelligence Act (TFAIA), Cal. Health & Safety Code § 22757.10
Publish a written frontier AI safety framework describing how the developer assesses and mitigates catastrophic risks from frontier AI models, with periodic updates.
Deadline: ongoing
Apply latent disclosures (e.g., watermarking or provenance metadata) to AI-generated image, video, and audio content produced by the covered provider's system.
Deadline: ongoing from the effective date
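As an illustration of the latent-disclosure idea, the toy sketch below attaches signed provenance metadata to generated content. Production systems typically rely on standards such as C2PA; the key, field names, and HMAC scheme here are invented for illustration.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"example-key"  # hypothetical provider signing key

def attach_provenance(content: bytes, model_id: str) -> dict:
    """Build a signed provenance manifest for a piece of generated content."""
    manifest = {
        "generator": model_id,
        "ai_generated": True,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    # Sign the manifest so downstream tools can verify it wasn't altered.
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
    return manifest

m = attach_provenance(b"...image bytes...", "example-model-v1")
print(m["ai_generated"])  # True
```

A real latent disclosure would embed this metadata inside the media file itself (or as an imperceptible watermark) rather than shipping it alongside.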
Maintain a free, publicly available AI detection tool allowing users to assess whether content is generated or modified by the covered provider's AI system.
Deadline: ongoing from the effective date
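A minimal detection check can be sketched as an exact-hash lookup against a provider-maintained registry of content its system produced. The registry contents below are illustrative, and hashing only catches byte-identical copies; a production tool would need robust watermark detection that survives re-encoding and edits.

```python
import hashlib

# Hypothetical registry of hashes of content the provider's system generated.
generated_hashes = {hashlib.sha256(b"example generated image").hexdigest()}

def is_provider_generated(content: bytes) -> bool:
    """Report whether submitted content matches the provider's registry."""
    return hashlib.sha256(content).hexdigest() in generated_hashes

print(is_provider_generated(b"example generated image"))  # True
print(is_provider_generated(b"unrelated content"))        # False
```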
Provide deployers with documentation including intended uses, harmful or inappropriate uses, data summaries, performance evaluations, mitigation measures, and information necessary for deployers to complete their impact assessments.
Deadline: before deployment
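The documentation package handed to deployers can be modeled as a structured record. The fields below paraphrase the items listed above and are illustrative, not statutory text.

```python
from dataclasses import dataclass, asdict

# Hedged sketch of a developer-to-deployer documentation package;
# field names are hypothetical, not taken from any statute.
@dataclass
class DeployerDocumentation:
    system_name: str
    intended_uses: list
    prohibited_uses: list          # harmful or inappropriate uses
    training_data_summary: str
    performance_evaluations: list
    mitigation_measures: list

doc = DeployerDocumentation(
    system_name="example-screening-model",
    intended_uses=["resume triage with human review"],
    prohibited_uses=["fully automated hiring decisions"],
    training_data_summary="see public training-data posting",
    performance_evaluations=["subgroup error-rate audit, 2025"],
    mitigation_measures=["threshold calibration per subgroup"],
)
print(asdict(doc)["system_name"])
```

Keeping the package as structured data lets a deployer pull these fields directly into its own impact assessment.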
- governance: Texas Responsible Artificial Intelligence Governance Act (TRAIGA), Tex. Bus. & Com. Code § 552.x
Refrain from developing or deploying AI systems with the intent to engage in unlawful discrimination against protected classes under Texas or federal law.
Deadline: ongoing
Framework controls
Maintain documentation throughout the AI system lifecycle, including data management, system development, verification and validation, and deployment, per ISO/IEC 42001 Annex A.6 controls.
Conduct AI system impact assessments and risk assessments addressing intended uses, deployment context, affected stakeholders, and mitigation of identified risks, per ISO/IEC 42001 Annex A.5 controls.
Establish, implement, maintain, and continually improve an AI management system (AIMS) per ISO/IEC 42001, covering policies, leadership commitment, roles, and integration with other management systems.
GOVERN function (NIST AI RMF): establish policies, processes, structures, and accountability for AI risk management across the organization, including senior leadership oversight and a risk-based culture.
MAP function: identify the context, intended uses, stakeholders, and risks of each AI system, including categorization of impacts on individuals, communities, and the organization.
MEASURE function: assess, analyze, and monitor AI risks using both quantitative and qualitative methods, including bias evaluation, robustness testing, and explainability assessments.
MANAGE function: prioritize and treat identified risks, allocate resources, and implement risk response strategies including mitigation, transfer, acceptance, or avoidance.
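The four functions can be made concrete with a single risk-register entry that touches each one; the structure and values below are invented for illustration.

```python
# Illustrative risk-register entry walking one risk through the four
# NIST AI RMF functions; all values are hypothetical.
risk = {
    "context": "chatbot deployed to customers",    # MAP: context and use
    "risk": "hallucinated policy answers",         # MAP: identified risk
    "measurements": {"hallucination_rate": 0.04},  # MEASURE: quantitative check
    "response": "mitigate",                        # MANAGE: mitigate/transfer/accept/avoid
    "owner": "AI governance board",                # GOVERN: accountability
}

# MANAGE constrains the response to one of the four risk strategies.
assert risk["response"] in {"mitigate", "transfer", "accept", "avoid"}
print(risk["risk"])
```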