NIST AI RMF Playbook — What It Is, How to Use It
Last verified: 2026-04-30 — primary source: NIST AI Resource Center (airc.nist.gov).
The NIST AI Risk Management Framework Playbook is the U.S. National Institute of Standards and Technology's official companion resource to NIST AI RMF 1.0. The Playbook does not replace the framework — it operationalizes it, providing concrete suggested actions, references, and informative documentation pointers for every subcategory of every function.
NIST publishes and maintains the Playbook on the AI Resource Center (AIRC) at airc.nist.gov/AI_RMF_Knowledge_Base/Playbook (retrieved 2026-04-30). It is voluntary, free of charge, and maintained as a living resource separate from the underlying framework PDF (NIST AI 100-1).
What the Playbook actually contains
The AI RMF itself (NIST AI 100-1, January 26, 2023) defines four core functions — GOVERN, MAP, MEASURE, MANAGE — and a structured set of categories and subcategories underneath each function. The framework PDF tells you what outcomes to achieve. The Playbook tells you how to approach them.
For each subcategory, the Playbook supplies:
- Suggested actions — practical, role-relevant steps an organization might take to satisfy that subcategory's outcome. Suggestions are written as prompts for the team rather than mandatory controls.
- Transparency and documentation prompts — questions the organization should be able to answer about the system at this stage of the lifecycle.
- References — pointers to academic, standards-body, and industry sources that informed NIST's thinking. References include ISO/IEC standards, IEEE guidance, peer-reviewed literature, and federal-agency outputs.
The Playbook is not a checklist. It is a structured menu of options. NIST is explicit that organizations should adopt only the suggestions that fit their risk profile, sector, and operational reality.
What the Playbook is not
Three common misconceptions are worth correcting up front:
- It is not certifiable. There is no "NIST AI RMF certification" issued by NIST itself. NIST does not certify, accredit, or audit conformance with the AI RMF. Organizations that need a certifiable management-system standard should use ISO/IEC 42001:2023, which is certifiable through accredited bodies.
- It is not a replacement for sector regulation. Following the Playbook does not exempt an organization from the Colorado AI Act, Texas TRAIGA, NYC Local Law 144, HIPAA, FCRA, EEOC, or any other binding regime. The Playbook is voluntary guidance that supports compliance with those regimes — it does not substitute for them.
- It is not static. NIST updates the Playbook on the AIRC website without re-issuing the AI RMF PDF. Organizations relying on a snapshot should re-check the AIRC at least annually.
NIST AI RMF certification — the honest answer
The term "NIST AI RMF certification" appears in several forms in the market. Compliance teams should know the distinction:
- Issued by NIST itself: none. NIST does not run a certification program for the AI RMF.
- Issued by third-party training providers: several commercial providers offer courses and credentials such as "NIST AI RMF 1.0 Architect." These are private certifications attesting that an individual has completed the provider's training; they are not endorsed or audited by NIST. They are useful as professional development; they are not a substitute for organizational conformance.
- For the organization: pursue ISO/IEC 42001 certification through an accredited body, optionally with a NIST AI RMF crosswalk in the documentation.
Generative AI Profile — the GenAI extension
NIST released the Generative AI Profile (NIST AI 600-1) on July 26, 2024, as a profile of the AI RMF tailored for generative AI risks. The Profile keeps the four-function structure but adds GenAI-specific subcategory guidance for confabulation (hallucination), data privacy, dangerous or violent recommendations, environmental harms, harmful bias, human-AI configuration, information integrity, information security, intellectual property, obscene/degrading/abusive content, toxicity, and value chain and component integration risks.
The Playbook on AIRC has been progressively extended with GenAI-Profile-specific suggested actions following the 600-1 release.
How to use the Playbook in practice
For an organization standing up an AI risk program for the first time, a defensible workflow is:
- Read AI RMF 1.0 (NIST AI 100-1) end-to-end. The framework PDF is the authoritative source for what each function and subcategory means. The Playbook assumes you already understand the framework structure.
- Choose your scope. Decide which AI systems are in scope (start with consumer-affecting and high-impact systems — these are also the systems most likely to fall under the Colorado AI Act high-risk definition).
- Walk through the GOVERN function first. Most organizations underinvest in governance. GOVERN subcategories address policy, accountability, oversight, and culture — the foundation that makes MAP/MEASURE/MANAGE meaningful.
- For each in-scope subcategory, review the Playbook's suggested actions. Pick the ones that fit your maturity, sector, and risk appetite. Document why others were rejected. The audit trail is itself a control.
- Cross-reference with state-law obligations. For each adopted action, note which state-law requirement it supports. Examples: Playbook MEASURE 2.11 actions on bias testing → satisfy the methodology underlying the NYC Local Law 144 bias audit; MAP 5.1 actions on impact assessment → support the Colorado AI Act § 6-1-1703(3) annual impact assessment; GOVERN actions on accountability and policy → pre-position deployer duties for a likely 2027 reintroduction of Virginia HB 2094-style bills, and provide a defensible due-care baseline in jurisdictions that regulate AI through narrow topic-specific statutes rather than a comprehensive framework (such as Florida's HB 919 + HB 757 deepfake regime).
- Refresh annually. The Playbook is living guidance and the GenAI Profile content is added incrementally. A scheduled annual re-review against the current AIRC version keeps your control library current.
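The adopt-or-reject-with-rationale step above lends itself to structured data rather than prose. A minimal sketch in Python, assuming a hypothetical record shape — the field names, the example action wordings, and the GOVERN 1.3 pairing are illustrative, while MEASURE 2.11 and MAP 5.1 are the subcategories cited in the workflow:

```python
from dataclasses import dataclass, field

@dataclass
class ControlDecision:
    subcategory: str   # AI RMF subcategory ID, e.g. "MEASURE 2.11"
    action: str        # Playbook suggested action, paraphrased
    adopted: bool      # True = adopted, False = rejected
    rationale: str     # why adopted or rejected — this is the audit trail
    supports: list[str] = field(default_factory=list)  # binding obligations the action maps to

# Illustrative entries; actions and mappings are examples, not NIST text.
library = [
    ControlDecision(
        subcategory="MEASURE 2.11",
        action="Run disparate-impact testing on hiring-model outputs",
        adopted=True,
        rationale="In-scope AEDT used for NYC candidates",
        supports=["NYC Local Law 144 bias audit methodology"],
    ),
    ControlDecision(
        subcategory="MAP 5.1",
        action="Document foreseeable impacts before deployment",
        adopted=True,
        rationale="System meets Colorado high-risk definition",
        supports=["Colorado AI Act § 6-1-1703(3) impact assessment"],
    ),
    ControlDecision(
        subcategory="GOVERN 1.3",
        action="Stand up an external AI advisory board",
        adopted=False,
        rationale="Disproportionate to current org size; revisit at next annual review",
    ),
]

# Rejected actions with documented rationales are themselves evidence of due care.
rejected = [c.subcategory for c in library if not c.adopted]
print(rejected)  # -> ['GOVERN 1.3']
```

Keeping rejections alongside adoptions, each with a rationale, is what turns the control library into the audit trail the workflow describes.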
How the Playbook compares to alternatives
| Resource | Maintained by | Format | Certifiable | Cost |
|---|---|---|---|---|
| NIST AI RMF Playbook | NIST | Web (AIRC) | No | Free |
| NIST AI RMF 1.0 (NIST AI 100-1) | NIST | PDF | No | Free |
| NIST AI 600-1 (GenAI Profile) | NIST | PDF | No | Free |
| ISO/IEC 42001:2023 | ISO | PDF (paid) | Yes (via accredited bodies) | ~CHF 173 standard + audit fees |
| ISO/IEC 23894 (AI Risk Management) | ISO | PDF (paid) | No | ~CHF 173 |
| EU AI Act harmonised standards | CEN-CENELEC | PDFs (paid) | No (support conformity assessment) | Variable |
For most U.S.-only programs, NIST AI RMF + Playbook is the operational anchor. For multi-region compliance (EU, UK, multi-state) where a certificate is useful, layer ISO/IEC 42001 on top — the two are complementary, not competing.
Where to get the Playbook PDF
The Playbook itself is not distributed as a single PDF. NIST hosts it as an interactive web resource on the AIRC because the content is updated continuously. Organizations that need an offline snapshot typically:
- Print or save individual function pages (GOVERN / MAP / MEASURE / MANAGE) from airc.nist.gov to PDF, date them, and archive them in their compliance file
- Pair the snapshot with the official AI RMF 1.0 PDF (https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf) and the Generative AI Profile PDF (https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf) — both are stable, version-locked publications
The two PDFs plus a dated AIRC snapshot are the canonical reference set.
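Since the two framework PDFs are stable and version-locked, that part of the reference set can be scripted. A sketch using only the Python standard library — the dated-filename convention is an assumption for illustration, not a NIST requirement, and `fetch()` needs network access, so it is defined but not called here:

```python
from datetime import date
from urllib.request import urlretrieve  # stdlib; used only if fetch() is run

# Stable, version-locked NIST publications (URLs as given in the document).
PDFS = {
    "NIST.AI.100-1.pdf": "https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf",
    "NIST.AI.600-1.pdf": "https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf",
}

def snapshot_names(as_of: date) -> list[str]:
    """Dated filenames for the compliance archive, e.g. '2026-04-30_NIST.AI.100-1.pdf'."""
    return [f"{as_of.isoformat()}_{name}" for name in sorted(PDFS)]

def fetch(as_of: date) -> None:
    """Download each PDF under its dated name (requires network access)."""
    for name, url in PDFS.items():
        urlretrieve(url, f"{as_of.isoformat()}_{name}")

print(snapshot_names(date(2026, 4, 30)))
# -> ['2026-04-30_NIST.AI.100-1.pdf', '2026-04-30_NIST.AI.600-1.pdf']
```

The AIRC Playbook pages themselves still need a manual (or browser-driven) save, since they are interactive web content rather than fixed files.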
Frequently asked questions
What is the NIST AI RMF Playbook?
The Playbook is NIST's official companion resource to AI RMF 1.0 that supplies suggested actions, transparency-and-documentation prompts, and references for each subcategory of the framework's four functions (GOVERN, MAP, MEASURE, MANAGE). It is voluntary, free, and hosted on the NIST AI Resource Center.
Where is the NIST AI RMF Playbook PDF?
There is no official single PDF. The Playbook lives as a continuously updated web resource at airc.nist.gov/AI_RMF_Knowledge_Base/Playbook. The companion AI RMF 1.0 framework is published as PDF NIST.AI.100-1.pdf, and the GenAI Profile is NIST.AI.600-1.pdf. Most teams archive a dated snapshot of relevant AIRC pages plus the two stable PDFs.
Is there a NIST AI RMF certification?
Not from NIST. NIST does not certify organizations or products against the AI RMF. Some private training providers issue "NIST AI RMF 1.0 Architect"-style credentials for individuals — these are useful as professional development but are not NIST-endorsed and do not certify organizational conformance. For organizational certification, ISO/IEC 42001 is the certifiable AI management-system standard.
How does the Playbook differ from AI RMF 1.0 itself?
AI RMF 1.0 (NIST AI 100-1, January 26, 2023) defines the four functions, categories, and subcategory outcomes — the what. The Playbook supplies suggested actions and references for each subcategory — the how. The framework is stable and version-locked; the Playbook is living guidance.
Does adopting the Playbook satisfy the Colorado AI Act?
Not automatically. The Playbook is voluntary guidance and Colorado AI Act § 6-1-1703 imposes binding obligations on deployers of high-risk AI. However, Playbook suggested actions for MAP and MEASURE substantially overlap with Colorado's annual impact-assessment contents. Adopting Playbook actions and documenting which ones address each Colorado obligation is a defensible compliance approach — but the assessment, disclosure, and notification duties under § 6-1-1703 are still the binding floor.
Does Playbook adoption help with NYC Local Law 144 bias audits?
Playbook MEASURE-function suggested actions on bias evaluation align with the methodology that NYC LL 144 audits assume — but LL 144 specifically requires an independent bias audit by a third party, with public summary and candidate notice. Internal Playbook-driven testing does not substitute. See the NYC Local Law 144 detail for the audit and disclosure requirements.
What's the relationship between the Playbook and the GenAI Profile?
The GenAI Profile (NIST AI 600-1, July 26, 2024) is itself a profile of the AI RMF tailored to generative-AI risks. The Playbook's web resource has been extended with GenAI-Profile-specific suggested actions. For generative-AI deployments, both are required reading — the Profile defines GenAI-specific subcategory outcomes and the Playbook supplies the corresponding suggested actions.
Cross-references
- NIST AI RMF 1.0 detail — framework summary, controls, and crosswalk to state laws
- ISO/IEC 42001 — the certifiable alternative
- Federal vs state AI law — how voluntary frameworks interact with binding state laws
- AI impact assessment template — practical template that draws on Playbook MAP actions
- Colorado AI Act — § 6-1-1703(3) impact-assessment regime that Playbook actions help satisfy