NIST AI Risk Management Framework (AI RMF 1.0)
Overview
The NIST Artificial Intelligence Risk Management Framework (AI RMF 1.0) is a voluntary framework released by the U.S. National Institute of Standards and Technology on January 26, 2023. It is intended to help organizations design, develop, deploy, and use AI systems in a manner that manages risks to individuals, organizations, and society. The framework is built around four core functions:

- **GOVERN** — establish a culture of risk management with policies, processes, accountability structures, and oversight
- **MAP** — identify the context, intended uses, stakeholders, and risks of an AI system
- **MEASURE** — assess, analyze, and track AI risks and impacts using qualitative and quantitative methods
- **MANAGE** — allocate risk resources and treat identified risks based on assessed impact

NIST also released the **Generative AI Profile (NIST AI 600-1)** in July 2024, which provides specific guidance for the unique risks of generative AI systems, including confabulation, harmful biases, intellectual property issues, and value chain risks.

While the RMF itself is non-binding, it is widely referenced in U.S. state AI laws, federal procurement requirements, and emerging international AI policy. It is not directly certifiable; ISO/IEC 42001 provides a complementary certifiable management-system standard.
Core controls / obligations
- **GOVERN 1-6** (governance): establish policies, processes, structures, and accountability for AI risk management across the organization, including senior leadership oversight and a risk-based culture.
- **MAP 1-5** (risk assessment): identify the context, intended uses, stakeholders, and risks of each AI system, including categorization of impacts on individuals, communities, and the organization.
- **MEASURE 1-4** (risk assessment): assess, analyze, and monitor AI risks using both quantitative and qualitative methods, including bias evaluation, robustness testing, and explainability assessments.
- **MANAGE 1-4** (governance): prioritize and treat identified risks, allocate resources, and implement risk response strategies including mitigation, transfer, acceptance, or avoidance.
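For teams that track these controls in tooling, the four functions and their top-level category counts can be modeled as a simple checklist. This is an illustrative sketch only: the function names and category numbering come from the list above, while the class, field names, and "not assessed" status values are hypothetical modeling choices, not anything NIST prescribes.

```python
from dataclasses import dataclass, field


@dataclass
class RmfFunction:
    """One AI RMF core function and its top-level categories."""
    name: str         # e.g. "GOVERN"
    categories: int   # number of top-level categories, e.g. GOVERN 1-6
    summary: str
    status: dict = field(default_factory=dict)

    def __post_init__(self):
        # Every category starts out unassessed; keys like "GOVERN 1".
        self.status = {f"{self.name} {i}": "not assessed"
                       for i in range(1, self.categories + 1)}


AI_RMF = [
    RmfFunction("GOVERN", 6, "policies, processes, accountability, oversight"),
    RmfFunction("MAP", 5, "context, intended uses, stakeholders, risks"),
    RmfFunction("MEASURE", 4, "assess, analyze, and track risks"),
    RmfFunction("MANAGE", 4, "prioritize and treat identified risks"),
]

# Total top-level categories across the four functions.
total = sum(f.categories for f in AI_RMF)
print(total)  # 19
```

A structure like this makes it easy to report coverage (how many categories have been assessed) per function during an internal gap analysis.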
Mapped to state laws
Common controls in NIST AI RMF that satisfy or overlap with US state AI law obligations.
- Transparency in Frontier Artificial Intelligence Act (TFAIA): strong overlap
- California AI Transparency Act: weak overlap
- Colorado Artificial Intelligence Act: strong overlap
- Illinois HB 3773 (AI in Employment Decisions): partial overlap
- NYC Local Law 144 (Automated Employment Decision Tools): partial overlap
- Texas Responsible Artificial Intelligence Governance Act (TRAIGA): partial overlap
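The crosswalk above is essentially a lookup table, and can be sketched as one. The law names and overlap labels mirror the list in this section; the dictionary shape and the helper function are hypothetical conveniences, not part of the framework or any of the statutes.

```python
# Overlap of NIST AI RMF controls with US state AI laws, as listed above.
CROSSWALK = {
    "Transparency in Frontier Artificial Intelligence Act (TFAIA)": "strong",
    "California AI Transparency Act": "weak",
    "Colorado Artificial Intelligence Act": "strong",
    "Illinois HB 3773 (AI in Employment Decisions)": "partial",
    "NYC Local Law 144 (Automated Employment Decision Tools)": "partial",
    "Texas Responsible Artificial Intelligence Governance Act (TRAIGA)": "partial",
}


def laws_by_overlap(level: str) -> list[str]:
    """Return the state laws whose RMF overlap matches `level`."""
    return [law for law, overlap in CROSSWALK.items() if overlap == level]


print(laws_by_overlap("strong"))
```

Filtering by overlap strength like this is one way to prioritize which statutes an RMF-aligned program should map first.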
Sources
Last verified: April 25, 2026