EU AI Act vs US State Laws — Compliance Comparison
For compliance teams operating in both Europe and the United States, understanding where the EU AI Act and US state AI laws overlap and diverge is essential to scoping a single coherent program.
Quick summary
| Dimension | EU AI Act | US state AI laws |
|---|---|---|
| Geographic scope | All 27 EU member states + extraterritorial | Per-state (CO, TX, NYC, IL, UT, CA, NJ proposed, etc.) |
| Risk categorization | Banned / High-risk / Limited / Minimal | Generally binary: high-risk (Colorado) or specific use case (NYC LL 144 hiring, etc.) |
| Conformity assessment | Yes — pre-market for high-risk | No equivalent; impact assessments instead |
| GenAI rules | Foundation-model obligations + GPAI tier | CA SB 942 (transparency), AB 2013 (training data), SB 53 (frontier safety) |
| Penalties | Up to €35M or 7% global turnover | Per-violation: $1.5K (NYC LL 144), $10K (Texas), $20K (Colorado), $1M (CA SB 53) |
| Enforcement body | Member-state authorities + AI Office | State Attorneys General |
| Effective date | Phased 2024-2027 | Various 2023-2026 |
Where they overlap (build once, satisfy both)
- High-risk AI categorization: EU Annex III and Colorado consequential-decision lists overlap substantially in employment, healthcare, education, financial services, housing, and government services.
- Documentation duties: developer documentation under Colorado § 6-1-1702 + EU AI Act technical documentation = largely the same artifacts.
- Impact assessment / conformity assessment: structurally similar — system purpose, data, performance, bias, monitoring.
- Bias and discrimination testing: NYC LL 144 bias audit + EU AI Act accuracy and bias testing (documented per Annex IV) = parallel methodologies.
- Provider transparency: EU AI Act transparency obligations (e.g., Article 50 on AI-generated content) + California SB 942 = both require synthetic-content disclosures.
- Foundation-model / frontier AI rules: EU GPAI obligations + California SB 53 = transparency frameworks, safety reporting, incident reporting.
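The "build once, satisfy both" claim can be made concrete by tracking which decision domains appear on both lists. A minimal sketch, assuming illustrative domain labels rather than statutory text (always verify scope against the actual Annex III and Colorado lists):

```python
# Illustrative domain labels for each regime's high-risk list.
# These are simplified tags, not the statutory categories themselves.
EU_ANNEX_III = {
    "employment", "education", "credit", "housing",
    "essential_public_services", "law_enforcement", "migration",
}
COLORADO_CONSEQUENTIAL = {
    "employment", "education", "financial_services", "housing",
    "healthcare", "insurance", "legal_services", "government_services",
}

def dual_scope(domain: str) -> bool:
    """True if a system in this domain is likely in scope under both regimes."""
    return domain in EU_ANNEX_III and domain in COLORADO_CONSEQUENTIAL

# Domains where one documentation package can serve both regimes:
overlap = sorted(EU_ANNEX_III & COLORADO_CONSEQUENTIAL)
print(overlap)  # -> ['education', 'employment', 'housing']
```

Domains that fall in only one set (e.g. insurance under Colorado, migration under the EU) still need regime-specific treatment, which is the point of maintaining the mapping explicitly.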
Where they diverge
Banned AI
The EU AI Act bans entire categories of AI: social scoring by public authorities, manipulation that causes harm, exploitation of vulnerabilities, real-time biometric ID in public spaces (with narrow exceptions), workplace emotion inference, untargeted facial-image scraping, and predictive policing in some contexts.
No U.S. state law has prohibitions of comparable scope. Texas TRAIGA prohibits AI developed for unlawful intentional discrimination but does not prohibit social scoring by private parties.
Pre-market conformity assessment
EU high-risk AI must complete a conformity assessment before placement on the market, often through a notified body. U.S. state laws use post-deployment impact assessments instead. The EU's pre-market regime effectively requires the documentation to exist before launch; U.S. laws permit launch, with periodic compliance obligations afterward.
Penalty calibration
EU AI Act penalties scale with global turnover for the most serious infractions (up to €35M or 7%, whichever is higher). State law penalties are flat per-violation amounts. EU penalty exposure is generally much higher for large enterprises, though the top tier applies only to the most serious infringement categories.
Enforcement structure
The EU enforces through Member-State competent authorities, coordinated by the AI Office; the U.S. enforces through state AGs (with NYC DCWP as a city-level outlier). The EU applies one regulation across its single market; the U.S. landscape is fragmented state by state.
A unified compliance program
For teams operating in both regions:
1. Adopt ISO/IEC 42001 as the management-system anchor
ISO/IEC 42001 provides a certifiable AI management system that maps cleanly to both EU AI Act conformity expectations and U.S. state AI law obligations. Certification is a defensible marker of due care under both regimes.
2. Use NIST AI RMF as the technical-control library
NIST AI RMF provides specific technical controls (particularly in the MEASURE function) that satisfy both EU technical-documentation and U.S. impact-assessment requirements.
3. Run pre-market conformity for EU + post-deployment refreshes for U.S.
The EU pre-market regime is the more demanding gate. Building documentation to the EU pre-market standard and then refreshing it annually under Colorado-style impact-assessment rules is operationally efficient.
4. Build state-specific addenda
Most state laws have 1-3 specific add-ons that aren't in the EU AI Act:
- NYC LL 144 public bias-audit summary on website
- Colorado AG notification on discovered discrimination
- California SB 53 Office of Emergency Services incident reporting
- Utah proactive GenAI disclosure for regulated occupations
Keep these as state-specific addenda in your master compliance plan.
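One way to keep the addenda manageable is a simple mapping keyed by jurisdiction, layered on top of the shared EU-anchored baseline. A sketch (item wording is illustrative, not statutory text):

```python
# State-specific add-ons layered on top of an EU-anchored baseline.
STATE_ADDENDA: dict[str, list[str]] = {
    "NYC": ["Publish bias-audit summary on public website (LL 144)"],
    "CO":  ["Notify Colorado AG of discovered algorithmic discrimination"],
    "CA":  ["Report safety incidents to CA Office of Emergency Services (SB 53)"],
    "UT":  ["Proactive GenAI disclosure for regulated occupations"],
}

def compliance_checklist(baseline: list[str], states: list[str]) -> list[str]:
    """Shared baseline tasks plus the addenda for each deployment jurisdiction."""
    tasks = list(baseline)
    for state in states:
        tasks.extend(STATE_ADDENDA.get(state, []))
    return tasks

plan = compliance_checklist(
    baseline=["Maintain EU technical documentation"],
    states=["CO", "NYC"],
)
```

Keeping the addenda as data rather than prose makes it trivial to regenerate the checklist when a new state law passes: add one entry, rerun.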
5. Federal preemption watch
The December 2025 U.S. EO on state-AI-law preemption does not currently affect EU obligations and may or may not affect state-law enforceability. EU AI Act enforcement is independent of U.S. preemption activity.
Common pitfalls for U.S.-based teams entering EU compliance
- Underestimating extraterritorial scope: EU AI Act applies if your AI's outputs are used in the EU, even if you have no EU office.
- Missing GPAI obligations: large foundation models have specific transparency and copyright disclosure obligations under EU GPAI tier.
- Failing to maintain technical documentation: the EU pre-market gate requires comprehensive technical documentation before launch; teams sometimes assume post-hoc reconstruction is acceptable. It is not.
- Ignoring real-time biometric prohibition: many U.S.-built systems include face/voice recognition that is subject to specific EU restrictions.
Cross-references
- Federal vs state AI law — U.S. federal layer
- High-risk AI system explained — concept comparison
- Colorado AI Act — closest U.S. analog to EU framework
- NIST AI RMF, ISO/IEC 42001 — bridge frameworks