AI Impact Assessment Template & Methodology

An AI impact assessment is a structured analysis of an AI system's intended use, affected stakeholders, performance, biases, and mitigations. It is the core compliance artifact under most modern AI regimes — the Colorado AI Act, the NIST AI RMF MAP function, ISO/IEC 42001 Annex A.5, and the EU AI Act conformity assessment.

If you're looking for the practical generator: skip to the Impact Assessment Template Generator tool — it produces a Markdown template tailored to your jurisdictions and role.

Why impact assessments matter

The Colorado AI Act § 6-1-1703(3) requires deployers of high-risk AI systems to complete an impact assessment annually, with prescribed contents:

  • The system's intended uses
  • Outputs the system produces
  • Performance metrics
  • Transparency and explainability measures
  • Post-deployment monitoring plan
  • Risks of algorithmic discrimination identified and mitigated

Failure to complete and document the assessment is enforceable as a deceptive trade practice under Colorado's Consumer Protection Act, with civil penalties of up to $20,000 per violation under C.R.S. § 6-1-112.

ISO/IEC 42001 Annex A.5 imposes parallel obligations on organizations seeking certification. The NIST AI RMF MAP function structures the same analysis, but adopting it is voluntary.

Who runs the assessment

  • Deployer-led, with developer cooperation: in most laws, the deployer carries the impact-assessment obligation. The developer has a parallel duty to provide the deployer with documentation needed to complete it (Colorado AI Act § 6-1-1702).
  • Cross-functional team: typical composition includes risk/compliance owner (drives), engineering/product (provides system facts), legal (interprets obligations), and a stakeholder representative (operations, customer success).

Structure of a defensible impact assessment

1. System purpose and intended use

Describe what the system does, what decisions it informs, and the explicit out-of-scope uses — misuse most often occurs when a system is applied outside its intended scope.

2. Applicable jurisdictions and frameworks

List every applicable law (CO AI Act, NYC LL 144, etc.) and the framework you've adopted as control baseline (NIST AI RMF, ISO 42001).

3. Stakeholder impact analysis

For each stakeholder group (consumers, operators, third parties), describe the impact category and severity (1-5 scale recommended) and current mitigations.

4. Data inventory

For each dataset used in training, fine-tuning, and operations: source, PII categories, sensitive attributes, retention period.

5. Algorithmic discrimination assessment

Protected categories considered, bias testing methodology, observed disparity metrics (selection rate, equal opportunity, demographic parity), remediation actions.
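The disparity metrics named above reduce to simple rate comparisons between groups. A minimal sketch with hypothetical decision data — the group data and thresholds are invented for illustration:

```python
def selection_rate(decisions):
    """Fraction of positive (selected) outcomes."""
    return sum(decisions) / len(decisions)

def true_positive_rate(decisions, labels):
    """Fraction of qualified candidates (label == 1) who were selected."""
    selected = [d for d, y in zip(decisions, labels) if y == 1]
    return sum(selected) / len(selected)

# Hypothetical outcomes for two groups: 1 = selected / qualified.
group_a = {"decisions": [1, 1, 1, 0, 1, 0, 1, 1], "labels": [1, 1, 1, 0, 1, 1, 1, 1]}
group_b = {"decisions": [1, 0, 0, 1, 0, 0, 1, 0], "labels": [1, 1, 0, 1, 0, 1, 1, 1]}

# Demographic parity: compare raw selection rates.
sr_a = selection_rate(group_a["decisions"])  # 0.75
sr_b = selection_rate(group_b["decisions"])  # 0.375
demographic_parity_gap = abs(sr_a - sr_b)

# Equal opportunity: compare true positive rates among the qualified.
tpr_a = true_positive_rate(group_a["decisions"], group_a["labels"])
tpr_b = true_positive_rate(group_b["decisions"], group_b["labels"])
equal_opportunity_gap = abs(tpr_a - tpr_b)
```

The assessment should record both the observed gaps and the threshold at which a gap triggers remediation.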

6. Performance and accuracy

Key metrics with threshold, observed value, and trend. Include error-type breakdown (false positive, false negative).
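The error-type breakdown matters because false positives and false negatives typically harm different stakeholders. A minimal sketch of the confusion counts, using invented prediction data:

```python
def error_breakdown(predictions, labels):
    """Confusion counts and derived false-positive / false-negative rates."""
    tp = sum(1 for p, y in zip(predictions, labels) if p == 1 and y == 1)
    fp = sum(1 for p, y in zip(predictions, labels) if p == 1 and y == 0)
    fn = sum(1 for p, y in zip(predictions, labels) if p == 0 and y == 1)
    tn = sum(1 for p, y in zip(predictions, labels) if p == 0 and y == 0)
    return {
        "false_positive_rate": fp / (fp + tn),  # harm: wrongly flagged
        "false_negative_rate": fn / (fn + tp),  # harm: wrongly passed over
        "accuracy": (tp + tn) / len(labels),
    }

metrics = error_breakdown(
    predictions=[1, 0, 1, 1, 0, 0, 1, 0],
    labels=[1, 0, 0, 1, 0, 1, 1, 0],
)
```

Reporting the two error rates separately, each against its own threshold, is more defensible than a single accuracy number.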

7. Transparency and disclosure

What the consumer is told, when, and where. Documentation provided to deployers / downstream parties.

8. Human oversight and review

Review trigger conditions. Appeals process for affected consumers.

9. Post-deployment monitoring

Performance drift, bias re-evaluation, incident counts. Cadence and threshold for action.

10. Risk register and treatment

For each identified risk: likelihood, impact, score, treatment (mitigate/transfer/accept/avoid), owner, review date.
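A common scoring convention is likelihood times impact on matching 1-5 scales, giving a 1-25 score for prioritization. A minimal sketch assuming that convention — the risk entries are illustrative, not taken from any real register:

```python
def risk_score(likelihood: int, impact: int) -> int:
    """Score = likelihood x impact on 1-5 scales (range 1-25)."""
    assert 1 <= likelihood <= 5 and 1 <= impact <= 5
    return likelihood * impact

# Hypothetical register entries; "treatment" is one of
# mitigate / transfer / accept / avoid.
register = [
    {"risk": "unmitigated disparity in screening", "likelihood": 3, "impact": 5,
     "treatment": "mitigate", "owner": "risk-owner", "review": "2025-06-01"},
    {"risk": "model drift degrades accuracy", "likelihood": 4, "impact": 3,
     "treatment": "mitigate", "owner": "eng-lead", "review": "2025-03-01"},
]
for entry in register:
    entry["score"] = risk_score(entry["likelihood"], entry["impact"])

# Highest-score risks surface first for treatment prioritization.
register.sort(key=lambda e: e["score"], reverse=True)
```

Whatever scale you adopt, document it in the assessment so the scores are interpretable at the next annual refresh.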

11. Sign-off

Risk owner, compliance, engineering/product, legal sign-offs with dates.

Practical methodology — how to actually run one

Step 1: Build a shared document (a few hours)

Use the Impact Assessment Template Generator to start. Customize for your stack.

Step 2: Information gathering (1-2 weeks)

  • Engineering provides: data sources, model architecture, performance metrics by slice, monitoring instrumentation
  • Product/operations provides: intended uses, customer-facing flows, success metrics
  • Legal/compliance provides: applicability map, jurisdiction-specific obligations

Step 3: Stakeholder review (1 week)

Walk affected business units through the draft. Capture risks and mitigations they identify.

Step 4: Bias testing (variable, 1-4 weeks)

If not already in place, run pre-deployment bias evaluation. Industry-standard methodologies: disparate impact analysis (4/5 rule), demographic parity, equalized odds, calibration. Choose based on legal context.
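The four-fifths rule mentioned above is the most mechanical of these checks: each group's selection rate must be at least 80% of the most-selected group's rate. A minimal sketch with hypothetical rates:

```python
def four_fifths_rule(selection_rates: dict[str, float]) -> dict[str, bool]:
    """Disparate impact check: each group's selection rate must be at
    least 80% of the highest group's rate (the EEOC four-fifths guideline)."""
    top = max(selection_rates.values())
    return {group: rate / top >= 0.8 for group, rate in selection_rates.items()}

# Hypothetical selection rates per demographic group.
rates = {"group_a": 0.60, "group_b": 0.45, "group_c": 0.50}
passes = four_fifths_rule(rates)
```

Here group_b fails (0.45 / 0.60 = 0.75 < 0.8) while group_c passes. A failing ratio is a screening signal, not a legal conclusion; the choice among these methodologies should still be made with counsel.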

Step 5: Documentation finalization and sign-off (1 week)

Log risks in your risk register. Get sign-offs. Lock the document with a version date.

Step 6: Annual refresh

Under Colorado AI Act and ISO 42001, the assessment must be refreshed annually or when material changes occur.

What does "material change" mean?

Under Colorado AI Act, material change includes:

  • Substantial modification to the AI system or its underlying data
  • New use case or expanded deployment context
  • Identification of unmitigated risks of algorithmic discrimination

Setting an internal threshold for what counts as substantial helps avoid annual-only assessment churn.

Common mistakes

  1. Treating it as a checkbox: assessments that don't actually inform decisions are evidence of bad faith if challenged. Make outputs feed real product decisions.
  2. One-time and forgotten: the obligation is ongoing. Build a refresh cadence.
  3. Generic copy-paste: assessments must be specific to the system. Generic language may fail in litigation or regulatory review.
  4. Skipping stakeholder review: when compliance runs the assessment alone, operational risks that business units would have flagged go unrecorded.
  5. No bias testing: writing about bias without testing for it. Run actual quantitative tests with documented methodology.

Cross-references