In January 2023, the National Institute of Standards and Technology published the AI Risk Management Framework (AI RMF 1.0). It was not a law. It was not a regulation. It was a voluntary framework — and it has quietly become the most referenced AI governance document in the United States.
Why? Because in a country that has chosen not to pass comprehensive federal AI legislation, the NIST AI RMF fills the vacuum. Federal agencies reference it. State regulators cite it. The Colorado AI Act (the first comprehensive state AI law, effective 2026) explicitly aligns with it. Corporate boards, procurement teams, and legal departments use it as the benchmark for “reasonable” AI governance.
If your organization uses AI in any capacity, you need to understand this framework.
Key takeaways
- The NIST AI RMF is the de facto US standard for AI risk management — voluntary but increasingly expected
- It is built on four core functions: Govern, Map, Measure, and Manage
- The framework is risk-based and technology-neutral — applicable to any AI system or use case
- It maps closely to the EU AI Act, making it valuable for organizations operating across both jurisdictions
What the NIST AI RMF is (and isn’t)
The NIST AI Risk Management Framework is a voluntary, rights-preserving framework designed to help organizations manage AI risks throughout the AI lifecycle. It was developed through extensive public consultation and is technology-neutral — it applies to machine learning, generative AI, computer vision, robotic process automation, and any other AI technology.
What it is not:
- It is not a law or regulation (though it is referenced by regulations)
- It is not a certification standard (unlike ISO 42001)
- It is not prescriptive — it does not tell you exactly what to do, but rather what to consider
- It is not limited to high-risk AI — it applies to all AI systems proportionate to their risk level
78% of US organizations managing AI risk reference the NIST AI RMF as their primary governance framework.
Source: Deloitte US AI Governance Survey, 2025
The four core functions
The NIST AI RMF is organized around four core functions, each containing categories and subcategories that define specific outcomes.
1. Govern
The Govern function is the foundation. It establishes the organizational structures, policies, and culture needed for effective AI risk management. Unlike the other three functions, which follow the technical lifecycle of an AI system, Govern is about people and processes.
Key outcomes:
- Policies and procedures. Your organization has an AI acceptable use policy and documented AI risk management procedures.
- Roles and responsibilities. Accountability is assigned: AI system owners, risk assessors, compliance officers, and governance committees are defined (see the sketch after this list).
- Risk culture. Leadership actively promotes a culture where AI risks are surfaced, reported, and addressed — not hidden.
- Workforce competency. Staff at all levels have appropriate AI literacy for their roles. This maps directly to the EU AI Act’s Article 4 requirement and is where AI training becomes a governance function.
- Stakeholder engagement. Affected communities and end users have input into AI design and deployment decisions.
- Third-party risk. AI risks from vendors, partners, and supply chain participants are identified and managed.
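The roles-and-responsibilities outcome is easiest to act on when accountability lives in a record you can query. Here is a minimal sketch of such a registry in Python; the class names, role names, policy label, and email address are illustrative assumptions, not NIST terminology:

```python
from dataclasses import dataclass, field
from enum import Enum


class GovernanceRole(Enum):
    """Accountability roles the Govern function expects you to assign."""
    SYSTEM_OWNER = "ai_system_owner"
    RISK_ASSESSOR = "risk_assessor"
    COMPLIANCE_LEAD = "compliance_lead"


@dataclass
class AISystemGovernance:
    """Who is accountable for one AI system, and under which policy version."""
    system_name: str
    policy_version: str          # e.g. your acceptable use policy, v1.2
    assignments: dict = field(default_factory=dict)  # GovernanceRole -> person

    def unassigned_roles(self) -> list:
        """Surface accountability gaps before a system ships."""
        return [role for role in GovernanceRole if role not in self.assignments]


screener = AISystemGovernance("resume-screener", "AUP-1.2")
screener.assignments[GovernanceRole.SYSTEM_OWNER] = "j.doe@example.com"
print(screener.unassigned_roles())   # risk assessor and compliance lead still open
```

A check like `unassigned_roles()` turns the Govern function's accountability requirement into a concrete pre-deployment gate.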
2. Map
The Map function is about understanding context. Before you can measure or manage AI risk, you need to map the territory — what AI systems exist, how they are used, who they affect, and what could go wrong.
Key outcomes:
- AI inventory. A complete catalogue of all AI systems in use, including shadow AI discovery mechanisms (a minimal record format is sketched after this list).
- Purpose and context. Each system’s intended use, operational environment, and affected populations are documented.
- Risk identification. Potential harms — to individuals, groups, communities, organizations, and ecosystems — are identified before deployment.
- Bias mapping. Sources of bias in data, models, and human decision-making are identified and documented.
- Benefits assessment. Expected benefits are documented alongside risks, enabling proportionate risk management.
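To make the inventory outcome concrete, here is a minimal sketch of one inventory record in Python. The field names are assumptions for illustration; the NIST AI RMF specifies outcomes, not a schema:

```python
from dataclasses import dataclass, field


@dataclass
class AIInventoryEntry:
    """One row in the Map function's AI inventory (fields are illustrative)."""
    system_name: str
    vendor: str                       # "internal" for in-house builds
    intended_use: str
    affected_populations: list[str]
    data_inputs: list[str]
    identified_risks: list[str] = field(default_factory=list)
    shadow_ai: bool = False           # discovered outside official procurement?


inventory = [
    AIInventoryEntry(
        system_name="resume-screener",
        vendor="internal",
        intended_use="rank inbound job applications",
        affected_populations=["job applicants"],
        data_inputs=["resumes", "job descriptions"],
        identified_risks=["disparate impact across protected classes"],
    ),
]

# Flag entries that reached production without going through procurement
print([entry.system_name for entry in inventory if entry.shadow_ai])
```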
3. Measure
The Measure function quantifies the risks identified in Map. It establishes metrics, methods, and benchmarks for evaluating AI system performance, reliability, and safety.
Key outcomes:
- Performance metrics. Accuracy, reliability, robustness, and other technical performance measures are defined and tracked.
- Bias testing. Regular testing for disparate impact across protected classes, which is critical for compliance with EEOC guidance and fair lending requirements in banking (a simple ratio test is sketched after this list).
- Transparency assessment. The degree to which the AI system’s decision-making process can be explained and audited.
- Security evaluation. Vulnerability assessments, adversarial testing, and data protection reviews.
- Human factors. How users interact with the AI system, including the risk of over-reliance on AI outputs.
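For the bias-testing outcome, a common starting point is a selection-rate comparison between groups. The sketch below implements the EEOC's four-fifths rule of thumb, assuming you can label outcomes by group; the numbers are hypothetical, and the ratio is a screening heuristic, not a legal determination:

```python
def selection_rate(outcomes: list[bool]) -> float:
    """Share of positive outcomes (e.g. 'advance to interview') in a group."""
    return sum(outcomes) / len(outcomes)


def disparate_impact_ratio(group_a: list[bool], group_b: list[bool]) -> float:
    """Ratio of the lower selection rate to the higher one.

    The EEOC 'four-fifths' heuristic treats a ratio below 0.8 as a
    signal of potential adverse impact -- a trigger for deeper review.
    """
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)


# Hypothetical screening outcomes for two applicant groups
group_a = [True] * 60 + [False] * 40   # 60% selected
group_b = [True] * 40 + [False] * 60   # 40% selected
ratio = disparate_impact_ratio(group_a, group_b)
print(f"ratio={ratio:.2f}, review needed: {ratio < 0.8}")  # 0.67 -> True
```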
4. Manage
The Manage function closes the loop. It defines how identified and measured risks are mitigated, monitored, and communicated over the AI system’s lifecycle.
Key outcomes:
- Risk treatment. Specific controls and mitigations for each identified risk — technical, procedural, and organizational.
- Monitoring. Continuous monitoring of AI system performance, with triggers for intervention when metrics degrade (see the sketch after this list).
- Incident response. Documented procedures for responding to AI failures, breaches, or harmful outcomes.
- Communication. Risk information is shared with relevant stakeholders — decision-makers, users, and affected parties.
- Decommissioning. Processes for retiring AI systems when they no longer meet performance or risk requirements.
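Here is a minimal sketch of the monitoring outcome, assuming a baseline captured during Measure and a per-system tolerance. The threshold, metric, and alert channel are placeholders; the Manage function expects you to set these per system, based on the risks identified in Map:

```python
def check_metric(name: str, value: float, baseline: float,
                 tolerance: float = 0.05) -> bool:
    """Return True and alert when a metric drifts below its baseline."""
    degraded = value < baseline * (1 - tolerance)
    if degraded:
        # In practice: open an incident, notify the system owner,
        # and record the event for the audit trail.
        print(f"ALERT: {name} dropped to {value:.3f} "
              f"(baseline {baseline:.3f}), trigger incident response")
    return degraded


# Weekly accuracy reading checked against the baseline captured in Measure
check_metric("resume-screener accuracy", value=0.81, baseline=0.90)
```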
The four functions are not sequential. They operate concurrently and continuously. Govern underpins everything. Map, Measure, and Manage cycle iteratively throughout the AI lifecycle.
How the NIST AI RMF connects to the EU AI Act
For organizations operating in both the US and EU, the NIST AI RMF and the EU AI Act are complementary rather than conflicting. Here is how they align:
| NIST AI RMF function | EU AI Act equivalent |
|---|---|
| Govern (policies, roles, culture) | Article 4 (AI literacy), Article 26 (deployer obligations) |
| Map (inventory, context, risk ID) | Article 6 (risk classification), Annex III (high-risk list) |
| Measure (metrics, bias testing) | Article 9 (risk management), Article 10 (data governance) |
| Manage (controls, monitoring) | Article 13 (transparency), Article 14 (human oversight) |
The key difference: the EU AI Act is legally binding, with penalties of up to 35 million euros or 7% of global annual turnover. The NIST AI RMF is voluntary. But implementing the NIST framework puts you well on your way to EU AI Act compliance, and vice versa.
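For teams that operate under both regimes, it can help to keep the alignment table above machine-readable, so each control is tagged once and reported against both frameworks. A minimal sketch (the dictionary simply restates the table; the function name is illustrative):

```python
# The alignment table above, as a machine-readable crosswalk.
NIST_TO_EU_AI_ACT = {
    "govern":  ["Article 4 (AI literacy)", "Article 26 (deployer obligations)"],
    "map":     ["Article 6 (risk classification)", "Annex III (high-risk list)"],
    "measure": ["Article 9 (risk management)", "Article 10 (data governance)"],
    "manage":  ["Article 13 (transparency)", "Article 14 (human oversight)"],
}


def eu_articles_for(nist_function: str) -> list[str]:
    """Which EU AI Act provisions a NIST-aligned control also evidences."""
    return NIST_TO_EU_AI_ACT.get(nist_function.lower(), [])


print(eu_articles_for("Measure"))
```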
Organizations that had already implemented the NIST AI RMF were 4x more likely to achieve EU AI Act compliance.
Source: PwC AI Governance Benchmarking, 2025
Implementation roadmap
Phase 1: Govern (weeks 1-4)
Establish your governance foundation:
- Appoint an AI governance lead or committee
- Draft your AI acceptable use policy
- Define roles: AI system owners, risk assessors, compliance leads
- Launch an AI training program covering AI literacy for all staff
Phase 2: Map (weeks 5-8)
Build your AI inventory:
- Catalogue all AI systems in use (including vendor tools, SaaS features, and internal builds)
- Run shadow AI discovery across the organization (a log-scanning sketch follows this list)
- Document each system’s purpose, users, data inputs, and affected populations
- Identify potential risks and harms for each system
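Shadow AI discovery can start with something as simple as matching network or SSO logs against a list of known AI tool domains. A minimal sketch, assuming you can export logs as user/domain pairs; the domain list and log format are illustrative, not a vetted catalogue:

```python
KNOWN_AI_DOMAINS = {"chat.openai.com", "claude.ai", "gemini.google.com"}

# Hypothetical export of network or SSO access logs
access_log = [
    {"user": "a.lee", "domain": "chat.openai.com"},
    {"user": "b.kim", "domain": "intranet.example.com"},
]

shadow_ai_hits = {
    (entry["user"], entry["domain"])
    for entry in access_log
    if entry["domain"] in KNOWN_AI_DOMAINS
}

# Feed hits into the Map inventory rather than blocking outright:
# the goal is visibility first, policy enforcement second.
for user, domain in sorted(shadow_ai_hits):
    print(f"unregistered AI use: {user} -> {domain}")
```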
Phase 3: Measure (weeks 9-12)
Define and execute measurements:
- Establish performance baselines for each AI system (see the sketch after this list)
- Conduct bias testing, particularly for systems affecting hiring, lending, healthcare, or other consequential decisions
- Assess transparency and explainability levels
- Evaluate security posture
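Baselines only help the later Manage phase if they are captured in a durable, comparable form. A minimal sketch, assuming JSON records stored alongside each system's inventory entry; the metric names and values are hypothetical:

```python
import datetime
import json


def capture_baseline(system: str, metrics: dict) -> dict:
    """Snapshot current metrics as the baseline that Manage monitors against."""
    record = {
        "system": system,
        "captured_at": datetime.date.today().isoformat(),
        "metrics": metrics,
    }
    # Persist alongside the system's inventory entry for audit purposes
    print(json.dumps(record, indent=2))
    return record


capture_baseline("resume-screener", {"accuracy": 0.90, "false_positive_rate": 0.07})
```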
Phase 4: Manage (ongoing)
Operationalize risk management:
- Implement controls for identified risks
- Establish continuous monitoring and alerting
- Build incident response playbooks
- Schedule regular governance reviews (quarterly minimum)
- Document everything for regulatory and audit purposes
Start with your highest-risk AI systems — anything that affects employment decisions, credit decisions, healthcare, insurance, or law enforcement. These are where regulators are focusing and where the NIST AI RMF delivers the most immediate value.
The NIST AI RMF in the US regulatory landscape
The NIST AI RMF does not exist in isolation. It sits at the center of an emerging US regulatory ecosystem:
- Colorado AI Act (2026): The first comprehensive state AI law explicitly references NIST AI RMF concepts for high-risk AI risk management.
- FTC enforcement: The FTC uses “reasonable” risk management as a standard — the NIST AI RMF defines what reasonable looks like.
- SEC guidance: For public companies, the NIST AI RMF provides a framework for the AI risk disclosures the SEC increasingly expects.
- Federal procurement: Executive Order 14110 (2023) directed federal agencies to use the NIST AI RMF. If you sell to the US government, compliance is effectively required.
- Sector regulators: The OCC, FDIC, Fed, HHS, and other agencies are aligning their AI guidance with NIST AI RMF principles.
Implement the NIST AI RMF with Brain
The Govern function’s workforce competency requirement is where most organizations struggle. Brain delivers practical AI training that builds AI literacy across your entire workforce — the foundation the NIST AI RMF demands. Role-specific modules on risk awareness, responsible AI use, data handling, and AI governance compliance. Tracked and documented for audit purposes.
Explore our plans to get started.
Related articles
AI Governance Framework: EU AI Act + NIST Guide
Build an AI governance framework that meets EU AI Act and NIST AI RMF requirements. Step-by-step implementation for organisations of all sizes.
AI Governance Framework: Checklist + Template (ISO 42001)
Build an AI governance framework step by step. Includes checklist, template, EU AI Act alignment and ISO 42001 integration guide.
AI Regulation UK: DSIT vs EU AI Act (2026)
UK AI regulation explained — DSIT framework, FCA/ICO/Ofcom roles, and how it compares to the EU AI Act. What UK businesses must do now.