How to Implement the NIST AI Risk Management Framework: A Complete Guide for 2026

Written by Dogan Akbulut

Artificial intelligence introduces a category of organizational risk that conventional information security frameworks were not designed to address. The risks that arise when an organization deploys a machine learning model — biased outputs, opaque decision-making, adversarial manipulation, and unintended harm to individuals — are fundamentally different in character from the risks of a misconfigured firewall or an unpatched server. They require a different vocabulary, a different assessment methodology, and a different governance structure.

The NIST AI Risk Management Framework, published by the National Institute of Standards and Technology in January 2023, is the most widely adopted voluntary framework for managing AI-specific risk. It is referenced in US federal AI executive orders, cited in EU AI Act implementation guidance, recognized by financial regulators and healthcare authorities, and increasingly required or strongly recommended by enterprise procurement teams as a condition of doing business with AI vendors.

This guide explains what the NIST AI RMF is, how its four core functions work in practice, how to build an AI risk profile using the framework, what the AI RMF Playbook provides, and how the framework maps to ISO 42001 and the EU AI Act for organizations navigating multiple AI governance requirements simultaneously.

What the NIST AI RMF Is — and What It Is Not

The NIST AI RMF is a voluntary framework. Unlike ISO 27001, which is an auditable standard against which organizations can be certified, the AI RMF does not result in a third-party credential. Unlike GDPR or CCPA, it is not a law with penalties for non-compliance. It is a structured approach to thinking about, assessing, and managing AI risk that organizations can adopt in whole or in part, adapt to their specific context, and use as the operational foundation for more formal AI governance commitments.

This voluntary nature is both a strength and a challenge. The strength is flexibility — the framework can be applied to a narrow AI use case or an entire AI product portfolio, implemented by a two-person startup or a global enterprise, and combined with other frameworks without creating conflicts. The challenge is that without a certification mechanism, organizations must define for themselves what implementation looks like and what evidence demonstrates that the framework is operating effectively.

The NIST AI RMF is built around a recognition that AI risk is not purely technical. Risk arises from how AI systems are designed and deployed; who they affect; in what context they operate; and how organizations govern their development and use. The framework addresses all of these dimensions rather than focusing exclusively on cybersecurity or data protection.

The framework is structured around two core documents. The AI RMF Core defines the four functions — Govern, Map, Measure, and Manage — and the categories and subcategories within each. The AI RMF Playbook provides suggested actions for implementing each subcategory in practice. Together, they give organizations both a conceptual structure and operational guidance.

The Four Core Functions of the NIST AI RMF

The AI RMF Core organizes AI risk management activity into four functions. Three of them — Map, Measure, and Manage — apply to individual AI systems throughout their lifecycle. The fourth — Govern — applies to the organization as a whole and creates the conditions for the other three functions to operate effectively.

Govern: Building the Organizational Foundation

Govern is the function that most closely resembles traditional information security governance. It addresses the policies, processes, roles, accountability structures, and organizational culture that enable responsible AI development and deployment. Without a functioning governance structure, Map, Measure, and Manage become isolated technical exercises disconnected from organizational decision-making.

Govern 1 requires that policies, processes, procedures, and practices across the organization related to the mapping, measuring, and managing of AI risks are in place, transparent, and implemented effectively. In practice, this means that AI risk management must be a formal organizational commitment rather than an informal practice. There must be documented policies governing how AI systems are developed and deployed, how risks are assessed, who has authority to approve deployment decisions, and how concerns about AI system behavior are escalated and resolved.

Govern 2 addresses accountability. Organizations must establish clear roles and responsibilities for AI risk management across the AI lifecycle — from design and development through deployment, monitoring, and decommissioning. This means identifying who is responsible for AI risk assessment, who reviews and approves risk treatment decisions, who monitors deployed systems for performance and safety, and who has authority to halt or modify an AI system that is behaving problematically.

Govern 3 addresses workforce diversity. The AI RMF emphasizes that teams working on AI systems should include individuals with diverse backgrounds, perspectives, and disciplines: not only technical staff, but also domain experts, ethicists, affected community representatives where appropriate, and individuals with expertise in the contexts where AI systems will be deployed.

Govern 4 and Govern 5 address organizational culture and engagement. Govern 4 requires a culture that considers and communicates AI risk. This is not merely a training requirement: engineers, product managers, data scientists, legal teams, and business owners must understand the risk implications of decisions they make throughout the AI lifecycle, not only at formal review gates. Govern 5 requires processes for robust engagement with relevant AI actors, so that feedback from users, external experts, and affected communities reaches the teams managing AI risk.

Govern 6 addresses policies and procedures for AI risk and benefits across the organization and with external parties. This includes supply chain considerations — how third-party AI components, foundation models accessed via API, and data providers are evaluated and governed.

Map: Understanding Context and Risk

The Map function is where AI risk management becomes specific to individual AI systems. Its purpose is to establish the context in which an AI system operates, identify the stakeholders it affects, understand the potential harms it could cause, and create the foundational understanding on which Measure and Manage activities depend.

Map 1 requires that the context in which the AI system is deployed is understood. This goes beyond technical specifications. Context includes the intended use case, the organizational setting in which the system operates, the characteristics of the population it serves, the potential for use beyond its intended purpose, and the regulatory and legal environment governing its deployment. An AI system that recommends loan amounts operates in a fundamentally different context from one that recommends content on a social platform, and the risk profile of each must be built from an understanding of that context.

Map 2 focuses on categorizing AI risks and impacts. The framework asks organizations to identify the potential harms their AI system could cause — to individuals, groups, organizations, and society — and to categorize those harms by type, likelihood, and magnitude. Harms in the AI RMF are grouped into physical, psychological, financial, societal, and reputational categories. An AI system used in hiring decisions carries the potential for financial and psychological harm that must be assessed explicitly. An AI system used in medical diagnosis carries the potential for physical harm and requires a different level of rigor.
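
To make harm categorization concrete, here is a minimal Python sketch of how a team might record Map findings. The harm categories follow the groupings named above; the 1-to-5 scales and the likelihood-times-magnitude score are illustrative assumptions, since the framework does not prescribe a scoring formula.

```python
from dataclasses import dataclass
from enum import Enum

class HarmType(Enum):
    PHYSICAL = "physical"
    PSYCHOLOGICAL = "psychological"
    FINANCIAL = "financial"
    SOCIETAL = "societal"
    REPUTATIONAL = "reputational"

@dataclass
class IdentifiedHarm:
    description: str      # who is harmed, and how
    harm_type: HarmType
    likelihood: int       # illustrative scale: 1 (rare) to 5 (expected)
    magnitude: int        # illustrative scale: 1 (minor) to 5 (severe)

    @property
    def risk_score(self) -> int:
        # Simple likelihood x magnitude heuristic; calibrate to your own scale.
        return self.likelihood * self.magnitude

# Hypothetical entries for a hiring-recommendation system.
harms = [
    IdentifiedHarm("Qualified candidates screened out by biased ranking",
                   HarmType.FINANCIAL, likelihood=3, magnitude=4),
    IdentifiedHarm("Rejected applicants receive no explanation",
                   HarmType.PSYCHOLOGICAL, likelihood=4, magnitude=2),
]
for h in sorted(harms, key=lambda h: h.risk_score, reverse=True):
    print(f"{h.risk_score:>2}  {h.harm_type.value:<13} {h.description}")
```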

Map 3 requires that AI risks and benefits be understood at the system and organizational level. Benefits must be assessed alongside risks — not as a counterweight that justifies harm, but as a dimension of the overall picture that informs proportionate risk management decisions. An AI system that carries meaningful risk potential may still be appropriate to deploy if the benefits are substantial and the risks are managed effectively.

The remaining Map categories extend this analysis to third-party components and to the characterization of impacts on individuals, groups, and society. Throughout, the framework expects risk assessments to draw on established research, domain expertise, and empirical evidence where available, and to acknowledge uncertainty explicitly where the evidence base is limited. AI risk assessment that relies exclusively on internal assumptions, without reference to the body of research on AI failure modes, bias, and adversarial vulnerability, falls short of the framework's expectations.

Measure: Assessing AI Risks Rigorously

The Measure function provides the analytical methods and metrics that allow organizations to assess AI risks in a structured, evidence-based, and repeatable way. It moves AI risk management from qualitative judgment to quantified, documented assessment.

Measure 1 establishes the metrics and methodologies for assessing AI risks. Before deploying an AI system, organizations must define what they will measure, how they will measure it, what thresholds indicate acceptable risk, and what would trigger a reassessment or a halt to deployment. This requires deciding in advance — not retroactively — what constitutes a failure and what evidence would demonstrate that a failure has occurred.
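
A minimal sketch of what deciding in advance can look like: acceptance criteria declared as versioned configuration, with an explicit action for each breach. The metric names, thresholds, and actions here are hypothetical placeholders to adapt per system.

```python
# Acceptance criteria declared before deployment, so they are versioned,
# reviewable, and cannot be defined retroactively to fit the results.
ACCEPTANCE_CRITERIA = {
    "auc_overall":             {"min": 0.80, "on_breach": "block_deployment"},
    "demographic_parity_diff": {"max": 0.05, "on_breach": "block_deployment"},
    "p95_latency_ms":          {"max": 300,  "on_breach": "reassess"},
}

def evaluate(results: dict) -> list[str]:
    """Return the actions triggered by a set of measured results."""
    actions = []
    for metric, rule in ACCEPTANCE_CRITERIA.items():
        value = results[metric]
        breached = ("min" in rule and value < rule["min"]) or \
                   ("max" in rule and value > rule["max"])
        if breached:
            actions.append(f"{metric}={value} -> {rule['on_breach']}")
    return actions

print(evaluate({"auc_overall": 0.83,
                "demographic_parity_diff": 0.07,
                "p95_latency_ms": 210}))
# ['demographic_parity_diff=0.07 -> block_deployment']
```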

Measure 2 is the most operationally intensive category in the framework. It requires that AI risks be evaluated, documented, and understood at the system level. This includes evaluating the AI system for bias and fairness across relevant demographic groups, assessing robustness to adversarial inputs, evaluating performance across the distribution of inputs the system is likely to encounter during deployment, and explicitly documenting the system's limitations. Most AI teams conduct some version of this evaluation — the AI RMF requires that it be systematic, documented, and linked to explicit risk conclusions.
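
As one concrete example of a Measure 2 evaluation, the sketch below computes demographic parity difference, the gap in positive-outcome rates between groups. It is one of many fairness metrics, and which metrics are appropriate depends on the system and its context; the sample data is hypothetical.

```python
from collections import defaultdict

def positive_rates(records):
    """records: iterable of (group_label, model_decision) pairs,
    where model_decision is 1 for a positive outcome and 0 otherwise."""
    counts = defaultdict(lambda: [0, 0])   # group -> [positives, total]
    for group, decision in records:
        counts[group][0] += int(decision)
        counts[group][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

def demographic_parity_difference(records):
    """Largest gap in positive-outcome rates between any two groups."""
    rates = positive_rates(records)
    return max(rates.values()) - min(rates.values())

# Hypothetical decisions from a model under evaluation.
sample = [("group_a", 1), ("group_a", 1), ("group_a", 0),
          ("group_b", 1), ("group_b", 0), ("group_b", 0)]
print(positive_rates(sample))                 # group_a: 2/3, group_b: 1/3
print(demographic_parity_difference(sample))  # 1/3, compare to the threshold
```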

Measure 2.9 specifically addresses the AI system's explainability and interpretability. Organizations must assess the extent to which the system's outputs can be explained — to operators, affected individuals, and regulators — and consider whether the level of explainability is appropriate for the system's operating context. A credit scoring model operating in a regulated context requires a higher level of explainability than a content recommendation system.

Measure 3 requires that identified risks be tracked over time. This is the bridge between the Measure and Manage functions. Risk assessments conducted before deployment must be revisited as the system operates in production and as the operating environment changes. AI systems that were assessed as low-risk at launch may become higher-risk as they encounter real-world data distributions that differ from training data, as the user population changes, or as adversarial techniques targeting the system's architecture evolve.
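
One common way to operationalize this tracking is a drift statistic compared against the assessment-time baseline. The sketch below uses the Population Stability Index; the 0.2 alert threshold is a widely used rule of thumb, not an AI RMF requirement.

```python
import math

def psi(expected: list[float], actual: list[float]) -> float:
    """Population Stability Index over binned proportions of one feature.
    expected: proportions at assessment time; actual: proportions now.
    Both lists must use the same bin edges and sum to 1."""
    eps = 1e-6  # guard against log(0) for empty bins
    return sum((a - e) * math.log((a + eps) / (e + eps))
               for e, a in zip(expected, actual))

baseline_bins = [0.25, 0.50, 0.25]    # distribution when risk was assessed
production_bins = [0.10, 0.45, 0.45]  # distribution observed in production

score = psi(baseline_bins, production_bins)
if score > 0.2:  # common rule-of-thumb alert level
    print(f"PSI={score:.3f}: input drift detected, trigger reassessment")
```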

Measure 4 addresses the feedback mechanisms through which new risk information is incorporated into ongoing assessments. Organizations must have processes for collecting feedback from users, operators, and affected individuals; for monitoring system outputs for unexpected behavior; and for escalating concerns to appropriate decision-makers.

Manage: Treating and Monitoring AI Risks

The Manage function is where risk assessment conclusions are translated into operational decisions, and ongoing monitoring ensures those decisions remain appropriate as the system and its environment evolve.

Manage 1 requires that identified risks be prioritized and treated. Treatment options in the AI RMF parallel those in conventional risk management: mitigate the risk through technical or operational controls, accept the risk with documented justification, transfer the risk through contractual or insurance mechanisms, or avoid the risk by not deploying the system or by modifying its scope. Every identified risk must have a documented treatment decision and an owner responsible for implementing and monitoring that treatment.

Manage 2 addresses the treatment of risks that emerge after deployment. AI systems behave differently in production than they do in testing. Manage 2 requires that organizations have mechanisms for detecting unexpected behavior, for assessing its risk implications, and for responding appropriately — which may include implementing technical controls, modifying the system's deployment scope, communicating transparently with affected users, or decommissioning the system if risks cannot be adequately managed.

Manage 3 requires that risks from third-party AI components be managed. For organizations that build on top of foundation models, use open-source model weights, or integrate AI services from third-party providers, Manage 3 is particularly important. The organization deploying an AI system is responsible for managing its risks, regardless of whether the underlying model was developed internally or sourced externally. This requires due diligence on AI suppliers, contractual protections, and ongoing monitoring of third-party model behavior.
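
A minimal sketch of what ongoing third-party monitoring can look like in code: wrapping each external model call with an audit log entry so that silent provider-side version or behavior changes leave a reviewable trail. The call_provider function is a placeholder, not a real vendor SDK, and the logged fields are illustrative.

```python
import json
import time

def call_provider(prompt: str) -> dict:
    # Placeholder standing in for a real vendor SDK call.
    return {"model": "vendor-model-2025-10", "output": "..."}

def monitored_call(prompt: str, log_path: str = "third_party_calls.jsonl") -> dict:
    """Call the third-party model and append an audit record."""
    start = time.time()
    response = call_provider(prompt)
    record = {
        "ts": start,
        "model_version": response.get("model"),  # surfaces silent version changes
        "latency_s": round(time.time() - start, 3),
        "output_chars": len(response.get("output", "")),
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return response

monitored_call("example prompt")
```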

Finally, the framework requires that residual risks, the risks that remain after treatment has been applied, be acknowledged explicitly, documented, and accepted by appropriate organizational authority. Accepting a residual risk without documented organizational authorization is not risk management. It is risk neglect.

Building an AI Risk Profile

The AI RMF introduces the concept of an AI risk profile — a structured characterization of the risks associated with a specific AI system, integrating outputs from Map, Measure, and Manage into a single reference document. An AI risk profile serves multiple purposes: it provides a consolidated view of an AI system's risk posture for organizational decision-makers, establishes a baseline against which future assessments can be compared, and serves as documentation demonstrating due diligence to customers, auditors, and regulators.

A well-constructed AI risk profile includes the context of the AI system — its intended use, deployment environment, affected populations, and regulatory context; the categories of harm identified through Map; the results of Measure activities — bias evaluations, robustness assessments, explainability assessments, and performance metrics with thresholds; the risk treatment decisions made for each identified risk and their implementation status; the residual risks that remain after treatment; the monitoring mechanisms in place and the cadence at which they operate; and the organizational owners responsible for ongoing risk management.
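
Under the assumption that the profile is maintained as a structured, versionable artifact rather than free-form prose, a minimal schema might look like the following. All field names and example values are illustrative; the AI RMF describes the content of a profile, not a format.

```python
from dataclasses import dataclass
from enum import Enum

class Treatment(Enum):
    MITIGATE = "mitigate"
    ACCEPT = "accept"
    TRANSFER = "transfer"
    AVOID = "avoid"

@dataclass
class RiskEntry:
    risk: str
    treatment: Treatment
    justification: str
    owner: str                        # a named individual, not a function
    residual: str = "none documented"

@dataclass
class AIRiskProfile:
    system_name: str
    intended_use: str
    deployment_context: str
    affected_populations: list[str]
    measurements: dict[str, float]    # results from Measure activities
    risks: list[RiskEntry]
    monitoring_cadence: str
    last_reviewed: str                # keep current: this is a living document

profile = AIRiskProfile(
    system_name="resume-ranker-v2",
    intended_use="Rank applicants for recruiter review",
    deployment_context="EU and US hiring workflows",
    affected_populations=["job applicants"],
    measurements={"auc_overall": 0.83, "demographic_parity_diff": 0.04},
    risks=[RiskEntry(
        risk="Biased ranking across demographic groups",
        treatment=Treatment.MITIGATE,
        justification="Reweighting applied; quarterly fairness re-evaluation",
        owner="jane.doe",
        residual="Small remaining gap accepted by the risk committee",
    )],
    monitoring_cadence="monthly",
    last_reviewed="2026-01-15",
)
```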

The AI risk profile is a living document. It should be updated when the system undergoes significant changes, when monitoring reveals unexpected behavior, when the deployment context changes, or when new research on AI risks relevant to the system's architecture or application domain becomes available.

The AI RMF Playbook

The AI RMF Playbook is a companion resource published alongside the core framework. It provides suggested actions for implementing each subcategory of the four functions — specific, operational steps that organizations can adapt to their context, rather than abstract principles that require translation into practice.

The Playbook is structured around the same function, category, and subcategory hierarchy as the core framework. For each subcategory, it provides suggested actions organized by the organizational roles most responsible for that activity — AI development teams, operators, senior leadership, legal, and compliance functions.

The Playbook is where the AI RMF becomes a practical implementation tool rather than a governance document. Organizations building their first AI risk management program should use the Playbook as a structured starting point, selecting suggested actions appropriate to their context and supplementing with additional controls where their specific risk profile requires.

NIST has also published supplementary resources, including a Glossary that standardizes AI risk management terminology, a companion document addressing AI risk in the context of generative AI, and an online informative reference tool that maps AI RMF subcategories to other frameworks, including GDPR, ISO 27001, and the EU AI Act.

How NIST AI RMF Maps to ISO 42001

ISO 42001 is the international standard for AI management systems. Published in December 2023, it provides a certifiable framework for organizations that develop, provide, or use AI systems, built on the same management system structure as ISO 27001. Where the NIST AI RMF is a flexible voluntary framework that organizations implement on their own terms, ISO 42001 is an auditable standard with a defined certification path.

The two frameworks are complementary, not competing. Organizations that rigorously implement the NIST AI RMF will find that a substantial portion of the work required for ISO 42001 certification has already been completed. The Govern function maps closely to ISO 42001's organizational and leadership requirements — Clauses 4 through 7, which address context, leadership, planning, and support. The Map and Measure functions provide the risk and impact assessment methodology required by ISO 42001 under Clause 6 and its Annex A controls. The Manage function maps to ISO 42001's operational requirements under Clause 8 and to its performance evaluation requirements under Clause 9.
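
For teams starting a gap assessment, the function-to-clause mapping described above can be encoded directly. This is a coarse summary of that paragraph, not NIST's official crosswalk; use the published informative references for subcategory-level mappings.

```python
# Coarse AI RMF function to ISO 42001 clause mapping, as described above.
AI_RMF_TO_ISO_42001 = {
    "Govern":  ["Clause 4 (Context)", "Clause 5 (Leadership)",
                "Clause 6 (Planning)", "Clause 7 (Support)"],
    "Map":     ["Clause 6 (Planning)", "Annex A controls"],
    "Measure": ["Clause 6 (Planning)", "Annex A controls"],
    "Manage":  ["Clause 8 (Operation)", "Clause 9 (Performance evaluation)"],
}

for function, clauses in AI_RMF_TO_ISO_42001.items():
    print(f"{function:<8} -> {', '.join(clauses)}")
```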

The primary addition that ISO 42001 requires beyond a rigorous NIST AI RMF implementation is the formal management system infrastructure — the documented AIMS scope, the Statement of Applicability for Annex A controls, the internal audit program, the management review process, and the nonconformity and corrective action procedure. Organizations that have implemented the AI RMF with strong documentation practices can add this infrastructure without having to rebuild their underlying AI risk management approach.

For organizations pursuing both frameworks, the recommended sequence is to implement the NIST AI RMF as the operational foundation and then build the ISO 42001 management system structure on top of it. This produces a program that is both operationally rigorous — because it is grounded in the AI RMF's detailed risk management guidance — and externally certifiable, because it meets ISO 42001's structural requirements.

How NIST AI RMF Maps to the EU AI Act

The EU AI Act, which entered into force in August 2024 and whose obligations phase in through 2027, establishes binding requirements for AI systems deployed in the European Union. Its risk-based approach — categorizing AI systems as unacceptable risk, high risk, limited risk, or minimal risk — requires that organizations understand the regulatory classification of their AI systems and implement requirements proportionate to that classification.

The NIST AI RMF's Map function is directly useful for EU AI Act compliance. The context and risk categorization activities required by Map provide the foundational analysis needed to determine whether an AI system falls into the EU AI Act's high-risk categories — which include AI systems used in employment decisions, credit scoring, biometric identification, critical infrastructure, and certain law enforcement contexts.

For high-risk AI systems under the EU AI Act, the Act requires a conformity assessment, a risk management system, technical documentation, data governance practices, transparency, human oversight mechanisms, and post-market monitoring. Each of these requirements has a corresponding activity in the NIST AI RMF. The risk management system requirement maps to the full Map-Measure-Manage cycle. The technical documentation requirement maps to the AI risk profile. Post-market monitoring maps to Measure 3 and Manage 2.

Organizations that implement the NIST AI RMF rigorously, maintain comprehensive AI risk profiles, and have functioning governance structures are well-positioned to demonstrate EU AI Act compliance for their AI systems — particularly for the high-risk categories where the Act's requirements are most demanding.

NIST has published an informative crosswalk between the AI RMF and the EU AI Act that maps specific subcategories to specific Act requirements, providing a structured basis for gap assessment for organizations subject to both.

Implementing the NIST AI RMF in Practice

Implementing the NIST AI RMF is not a single project with a defined end state. It is an ongoing organizational capability that is built incrementally and refined as the organization's AI portfolio grows in scale and complexity.

Start with Govern. Before conducting Map, Measure, or Manage activities for individual AI systems, establish the organizational foundation. Define what counts as an AI system in your organization's context. Establish who is responsible for AI risk management. Document the policies that govern how AI systems are developed and deployed. Create the escalation paths through which AI risk concerns reach decision-makers. Without Govern, the Map, Measure, and Manage activities produce risk assessments that sit in documents without influencing organizational decisions.

Build an AI system inventory. You cannot manage AI risks you do not know exist. Create a register of AI systems your organization develops, deploys, or procures — including AI components embedded in products, AI features added to existing platforms, and third-party AI services integrated into workflows. Assign an owner to each system. This inventory is the starting point for prioritizing Map activities.

Prioritize Map activities by risk potential. Not every AI system in your inventory requires the same depth of risk assessment. Systems that make consequential decisions about individuals, process sensitive data, have broad deployment reach, or operate in regulated contexts should be prioritized for rigorous Map and Measure activities. Lower-risk systems may warrant a lighter-touch assessment, as in the triage sketch below.
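
Here is an illustrative triage heuristic based on those four attributes. The scoring and tier boundaries are assumptions to calibrate to your own portfolio; the AI RMF does not prescribe a prioritization scheme.

```python
def assessment_tier(consequential_decisions: bool,
                    sensitive_data: bool,
                    broad_reach: bool,
                    regulated_context: bool) -> str:
    """Assign an assessment depth from four coarse risk attributes.
    Scoring and tier boundaries are illustrative assumptions."""
    score = sum([consequential_decisions, sensitive_data,
                 broad_reach, regulated_context])
    if score >= 2:
        return "full Map + Measure assessment"
    if score == 1:
        return "lighter-touch assessment"
    return "inventory entry with periodic review"

# A hiring model processing applicant data in a regulated context:
print(assessment_tier(consequential_decisions=True, sensitive_data=True,
                      broad_reach=False, regulated_context=True))
# -> full Map + Measure assessment
```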

Make Measure activities systematic and documented. Bias evaluations, robustness tests, and performance assessments conducted informally and left undocumented provide no evidentiary value in the event of a regulatory inquiry, a customer audit, or an internal review following an AI system failure. Define the methodology before testing, document the results (including unfavorable findings), and record the risk conclusions drawn from those results.

Treat management as ongoing, not one-time. Risk treatment decisions made at deployment must be revisited. Build monitoring into the operational cadence of deployed AI systems. Define the conditions that trigger reassessment. Assign responsibility for ongoing monitoring to named individuals rather than to organizational functions in the abstract.

Document everything in the AI risk profile. The AI risk profile is the primary artifact that demonstrates that your organization has implemented the NIST AI RMF for a given system. It should be complete enough that a reader unfamiliar with the system can understand its risk context, the assessments conducted, the treatment decisions made, and the residual risks accepted. It should be current enough to reflect the system's actual state in production.

The NIST AI RMF is the most mature and widely recognized voluntary AI governance framework available. Organizations that implement it rigorously — rather than treating it as a checklist to be checked off with minimal effort — build the governance infrastructure that enterprise customers, regulators, and auditors increasingly expect from AI companies operating in 2026 and beyond.

dsalta helps AI companies implement the NIST AI RMF and build the documentation, monitoring, and governance infrastructure required to satisfy enterprise procurement requirements, regulatory expectations, and ISO 42001 certification audits.
