High-Risk AI System Explained
The term high-risk AI system sits at the center of most modern AI regulation. The Colorado AI Act, EU AI Act, NIST AI RMF, and ISO/IEC 42001 all use a version of the concept — though with different definitions and scopes.
Colorado AI Act (the U.S. anchor)
Under C.R.S. § 6-1-1701, a high-risk artificial intelligence system is an AI system that, when deployed, makes — or is a substantial factor in making — a consequential decision.
A consequential decision is one that has a material legal or similarly significant effect on a consumer's access to or terms of:
- Education enrollment or opportunity
- Employment or employment opportunity
- Financial or lending services
- Essential government services
- Healthcare services
- Housing
- Insurance
- Legal services
Not every AI system is high-risk. The Colorado Act explicitly excludes from "high-risk" several categories where the AI performs narrow procedural tasks: anti-fraud, cybersecurity, calculator/spreadsheet-style functions, and other carve-outs enumerated in § 6-1-1701(7)(b).
Parallel concepts
EU AI Act
The EU AI Act defines high-risk AI in Annex III by enumerated categories: biometric identification, critical infrastructure, education, employment, essential public and private services, law enforcement, migration, and the administration of justice and democratic processes. The structural overlap with the Colorado Act is significant — both target consumer-affecting decisions in similar domains — though the EU's outright prohibitions and conformity-assessment regime go further than Colorado's.
NIST AI RMF
NIST AI RMF does not use the term "high-risk" as a binary classifier. Instead, the MAP function asks the organization to characterize impact severity per system. Organizations operationalize this by applying more stringent controls (more frequent measurement, more rigorous oversight) where impact severity is higher.
ISO/IEC 42001
ISO/IEC 42001 follows a similar approach via Annex A.5 (AI system impact assessment) — the standard requires the organization to assess each system's impact on individuals, groups, and society, and apply controls proportionate to assessed risk. The standard does not impose a high-risk binary.
Why the binary matters in U.S. law
Under Colorado law, high-risk classification triggers:
- Annual impact assessments (§ 6-1-1703(3))
- Mandatory consumer disclosure when used in a consequential decision (§ 6-1-1703(4))
- Right-to-correct and right-to-appeal for affected consumers
- Developer documentation obligations (§ 6-1-1702)
- Attorney General notification on discovery of algorithmic discrimination
If a system is not high-risk, most of these obligations do not apply. So the threshold determination is a critical compliance choice.
How to determine if your system is high-risk (Colorado test)
- Does the system make, or substantially factor into, a decision? The substantial-factor analysis asks whether the AI's output materially drives the human decision-maker's choice or merely informs it.
- Is the decision "consequential," affecting access to or terms of one of the eight enumerated areas (education, employment, finance, government services, healthcare, housing, insurance, legal)?
- Does an exception apply? Anti-fraud, cybersecurity, narrow technical functions, and other carve-outs in § 6-1-1701(7)(b) may exempt the system.
If the answers are yes, yes, and no, the system is high-risk.
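The three-part test can be sketched as a simple decision function. This is an illustrative sketch only, not legal advice: the area and exclusion sets below are simplified stand-ins for the statutory text of § 6-1-1701, and the string labels are hypothetical.

```python
# Illustrative sketch of the Colorado high-risk test (C.R.S. § 6-1-1701).
# Simplified for demonstration; the real analysis is fact-specific.

# The eight enumerated consequential-decision areas (labels are illustrative).
CONSEQUENTIAL_AREAS = {
    "education", "employment", "financial", "government_services",
    "healthcare", "housing", "insurance", "legal",
}

# A simplified subset of the § 6-1-1701(7)(b) carve-outs.
EXCLUDED_FUNCTIONS = {
    "anti_fraud", "cybersecurity", "calculator", "spreadsheet", "spam_filter",
}

def is_high_risk(makes_or_substantially_factors: bool,
                 decision_area: str,
                 system_function: str) -> bool:
    """Apply the simplified three-step test in order."""
    if not makes_or_substantially_factors:        # Step 1: decision role
        return False
    if decision_area not in CONSEQUENTIAL_AREAS:  # Step 2: consequential?
        return False
    if system_function in EXCLUDED_FUNCTIONS:     # Step 3: exception applies
        return False
    return True

# Resume-screening tool that ranks candidates:
print(is_high_risk(True, "employment", "resume_ranking"))  # True
# Spam filter on customer support email:
print(is_high_risk(True, "employment", "spam_filter"))     # False
```

The ordering mirrors the prose: the exceptions are checked last because a carve-out can exempt a system even when the first two answers are yes.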
Practical implications
- Fine-tuning a foundation model for credit decisions → high-risk
- Using GPT-style chat for internal documentation → not high-risk
- Resume screening tool that ranks candidates → high-risk for employment
- Spam filter on customer support email → not high-risk (narrow technical function)
Documentation expectations
For every high-risk system, expect to document:
- Intended uses and out-of-scope uses
- Data sources and retention
- Performance metrics by demographic slice
- Bias-test methodology and results
- Human-review trigger conditions
- Post-deployment monitoring plan
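The checklist above can be tracked as a simple record so gaps are visible before an assessment is due. This is a hypothetical structure; the field names are illustrative and are not mandated by any statute or standard.

```python
# Hypothetical documentation checklist for a high-risk AI system.
# Field names are illustrative, not statutory requirements.
from dataclasses import dataclass, fields

@dataclass
class HighRiskSystemDocs:
    intended_uses: str = ""
    out_of_scope_uses: str = ""
    data_sources_and_retention: str = ""
    performance_by_demographic: str = ""
    bias_test_methodology: str = ""
    human_review_triggers: str = ""
    monitoring_plan: str = ""

    def missing_items(self) -> list[str]:
        """Names of documentation items not yet filled in."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

docs = HighRiskSystemDocs(intended_uses="Credit underwriting for personal loans")
print(docs.missing_items())  # the six items still to be documented
```

A structure like this makes the impact-assessment baseline a matter of filling fields rather than drafting from scratch.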
Use the Impact Assessment Generator to produce a baseline document.
Cross-references
- Colorado AI Act detail — the source of the U.S. anchor definition
- Federal vs state pillar — how state high-risk concepts interact with federal frameworks
- EU AI Act vs US state laws — comparison of high-risk taxonomies