
NIST AI Risk Management Framework (AI RMF 1.0)

Overview

The NIST Artificial Intelligence Risk Management Framework (AI RMF 1.0) is a voluntary framework released by the U.S. National Institute of Standards and Technology on January 26, 2023. It is intended to help organizations design, develop, deploy, and use AI systems in a way that manages risks to individuals, organizations, and society.

The framework is built around four core functions:

- **GOVERN**: establish a culture of risk management with policies, processes, accountability structures, and oversight
- **MAP**: identify the context, intended uses, stakeholders, and risks of an AI system
- **MEASURE**: assess, analyze, and track AI risks and impacts using qualitative and quantitative methods
- **MANAGE**: allocate risk resources and treat identified risks based on assessed impact

NIST also released the **Generative AI Profile (NIST AI 600-1)** in July 2024, which provides specific guidance on the unique risks of generative AI systems, including confabulation, harmful bias, intellectual property issues, and value chain risks.

While the AI RMF itself is non-binding, it is widely referenced in U.S. state AI laws, federal procurement requirements, and emerging international AI policy. The RMF is not directly certifiable; ISO/IEC 42001 provides a complementary, certifiable management-system standard.

Core controls / obligations

Mapped to state laws

Common controls in NIST AI RMF that satisfy or overlap with US state AI law obligations.
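As an illustration, a controls-to-obligations crosswalk like the one this section describes can be modeled as a simple lookup keyed by RMF function. This is a minimal sketch: the control IDs, summaries, and state-law labels below are hypothetical examples chosen for illustration, not an official NIST or statutory mapping.

```python
# Hypothetical sketch of a crosswalk between NIST AI RMF controls and
# state-law obligations. IDs and statute labels are illustrative only;
# they are not an official NIST mapping.
from dataclasses import dataclass, field


@dataclass
class Control:
    control_id: str                  # e.g. "GOVERN 1.1" (RMF subcategory style)
    function: str                    # one of: GOVERN, MAP, MEASURE, MANAGE
    summary: str
    state_obligations: list[str] = field(default_factory=list)


# Example rows a real crosswalk table might contain (hypothetical labels).
crosswalk = [
    Control("GOVERN 1.1", "GOVERN",
            "Legal and regulatory requirements are understood and managed",
            ["Colorado AI Act: developer duty of care (hypothetical label)"]),
    Control("MEASURE 2.11", "MEASURE",
            "Fairness and bias are evaluated and documented",
            ["NYC Local Law 144: annual bias audit (hypothetical label)"]),
]


def obligations_for(function: str) -> list[str]:
    """Collect the state-law obligations that overlap a given RMF function."""
    return [ob for c in crosswalk if c.function == function
            for ob in c.state_obligations]


print(obligations_for("MEASURE"))
# → ['NYC Local Law 144: annual bias audit (hypothetical label)']
```

A flat list of dataclasses keeps the mapping easy to audit and extend; a production tool would more likely source these rows from a maintained database rather than hard-coded entries.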

Sources

Last verified: April 25, 2026
