
Understanding the NIST AI RMF (AI Risk Management Framework)

Leïla Sayssa
24 March 2026 · 9 minutes read time

What is the NIST AI RMF?

The National Institute of Standards and Technology (NIST) has developed a comprehensive Artificial Intelligence Risk Management Framework (AI RMF 1.0) to assist organisations in the responsible design, development, and deployment of AI technologies.

This voluntary guide distinguishes AI-specific challenges (such as model opacity and data drift) from traditional software risks, emphasising that trustworthiness is a multi-faceted concept involving safety, fairness, and transparency.

The document is built around a four-function Core (GOVERN, MAP, MEASURE, and MANAGE) that provides a structured methodology for identifying and mitigating potential harms to individuals and society. Ultimately, the framework functions as a living document intended to foster a culture of risk awareness while promoting innovation and public trust in an evolving technological landscape.

The AI RMF Core is organized into four high-level functions: GOVERN, MAP, MEASURE, and MANAGE. These functions are designed to help organisations operationalise the management of AI risks throughout the system's lifecycle.

The NIST AI RMF Playbook provides a set of recommended actions designed to support the implementation of the outcomes established in the AI RMF. Organisations may choose to adopt those that are relevant to their particular context.

The playbook is available on the NIST website.

What are the main attributes of the AI RMF?

The NIST AI Risk Management Framework (AI RMF) was developed based on ten key attributes designed to guide its creation and ensure its effectiveness across diverse sectors (Appendix D of the AI RMF). These attributes specify that the AI RMF strives to:

  • Be risk-based, resource-efficient, pro-innovation, and voluntary: It focuses on managing risks without being overly burdensome or stifling innovation.
  • Be consensus-driven and transparent: It is developed and updated through an open process where all stakeholders have the opportunity to contribute.
  • Use clear and plain language: The framework is designed to be understandable to a broad audience, including non-professionals and senior executives, while remaining technically deep enough for practitioners.
  • Provide a common language and understanding: It offers a shared taxonomy, terminology, and definitions for managing AI risks.
  • Be easily usable and adaptable: It is intended to be intuitive and fit well within an organisation's existing broader risk management strategies.
  • Be universally applicable: The framework is designed to be useful across a wide range of perspectives, sectors, and technology domains.
  • Be outcome-focused and non-prescriptive: Rather than providing one-size-fits-all requirements, it offers a catalog of desired outcomes and approaches.
  • Foster awareness of existing standards: It takes advantage of existing best practices and methodologies while highlighting where additional resources are needed.
  • Be law- and regulation-agnostic: It supports an organisation's ability to operate under various domestic and international legal or regulatory regimes.
  • Be a living document: The AI RMF is intended to be regularly updated as technology, understanding, and stakeholder experiences evolve.

What are the four core functions of the AI RMF?

1. GOVERN

The GOVERN function is a cross-cutting requirement that informs and is infused throughout the other three functions. It focuses on:

  • Cultivating a risk management culture: It establishes an organisational environment where risk is anticipated and managed proactively.
  • Establishing policies and accountability: It outlines the processes, legal/regulatory requirements, and organisational schemes needed to manage risks, including defining clear roles and responsibilities.
  • Workforce diversity and training: It prioritises diversity, equity, and inclusion in the risk management process and ensures personnel are trained to perform their duties.

2. MAP

The MAP function is used to establish the context needed to frame risks related to an AI system. Key activities include:

  • Identifying intended purposes and settings: Understanding the specific goals, beneficial uses, and the environments where the AI will be deployed.
  • Categorising the AI system: Defining the specific tasks (e.g., generative models, classifiers) and identifying the system's knowledge limits.
  • Characterising impacts: Identifying the likelihood and magnitude of potential harms to individuals, groups, society, and the environment.
  • Checking assumptions: This function allows organisations to verify if their initial assumptions about the AI's use cases remain valid.

3. MEASURE

The MEASURE function employs quantitative and qualitative tools to analyse, assess, and monitor identified AI risks. This includes:

  • Evaluating trustworthy characteristics: Testing the system for validity, reliability, safety, security, fairness, and privacy-enhancement.
  • Rigorous testing (TEVV): Implementing test, evaluation, verification, and validation processes, including comparisons against performance benchmarks.
  • Tracking risks over time: Establishing mechanisms to monitor existing, unanticipated, and emergent risks while the system is in production.
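The measurement activities above can be sketched as a minimal TEVV-style check: computed metrics are compared against pre-agreed benchmarks, and any shortfall is flagged for follow-up under the MANAGE function. The metric names, values, and thresholds below are invented for illustration; they are not prescribed by the framework.

```python
# Hypothetical performance benchmarks agreed earlier in the lifecycle.
benchmarks = {"accuracy_min": 0.90, "false_positive_rate_max": 0.05}

# Hypothetical results from a test and evaluation run.
measured = {"accuracy": 0.87, "false_positive_rate": 0.04}

findings = []
if measured["accuracy"] < benchmarks["accuracy_min"]:
    findings.append("accuracy below benchmark")
if measured["false_positive_rate"] > benchmarks["false_positive_rate_max"]:
    findings.append("false positive rate above limit")

# Any finding becomes an input to risk treatment (MANAGE).
print(findings)
```

In practice such checks would run continuously in production, feeding the "tracking risks over time" outcome rather than a one-off test.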

4. MANAGE

The MANAGE function involves allocating resources to the risks that have been mapped and measured. It focuses on:

  • Risk treatment: Prioritising and acting upon risks based on their projected impact. Response options include mitigating, transferring, avoiding, or accepting the risk.
  • Maximising benefits and minimising harm: Implementing strategies to sustain the value of the AI system while reducing the likelihood of failures.
  • Incident response and recovery: Creating plans to respond to and recover from incidents, including mechanisms to deactivate or disengage systems that perform inconsistently with their intended use.
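One way to picture how MANAGE builds on the other functions is a minimal risk-register sketch. The function names and response options come from the AI RMF; the entry structure, field names, scoring scheme, and example risks are all hypothetical.

```python
from dataclasses import dataclass

# Hypothetical risk-register entry; the fields and 1-5 scales are
# illustrative, not part of the NIST AI RMF itself.
@dataclass
class RiskEntry:
    description: str
    function: str          # GOVERN, MAP, MEASURE, or MANAGE
    likelihood: int        # 1 (rare) .. 5 (almost certain)
    impact: int            # 1 (negligible) .. 5 (severe)
    response: str = "mitigate"  # mitigate | transfer | avoid | accept

    @property
    def priority(self) -> int:
        # Simple likelihood x impact score for triage.
        return self.likelihood * self.impact

register = [
    RiskEntry("Training data drifts from production data", "MEASURE", 4, 3),
    RiskEntry("Unclear accountability for model decisions", "GOVERN", 3, 4),
    RiskEntry("Model used outside its intended setting", "MAP", 2, 5),
]

# MANAGE: allocate resources to the highest-priority risks first.
for entry in sorted(register, key=lambda e: e.priority, reverse=True):
    print(f"{entry.function:8} {entry.priority:2}  {entry.description}")
```

A real register would of course carry owners, deadlines, and links back to the specific Core subcategories each risk relates to.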

What are the seven characteristics of a trustworthy AI system?

The NIST AI Risk Management Framework identifies seven key characteristics that contribute to the trustworthiness of an AI system. These characteristics are socio-technical attributes, meaning they are influenced by both technical design and the social context in which the system is used.

The seven characteristics are:

  1. Valid and reliable: Validity is the confirmation that the system's requirements for its specific intended use are fulfilled, while reliability is its ability to perform without failure under given conditions over time. This characteristic includes accuracy (how close results are to true values) and robustness (the ability to maintain performance under varied circumstances).
  2. Safe: AI systems should not, under defined conditions, lead to a state that endangers human life, health, property, or the environment. Safety is improved through responsible design, clear information for users, and the ability to intervene or shut down a system if it deviates from expected functionality.
  3. Secure and resilient: Resilience is the ability of a system to withstand unexpected adverse events or changes in its environment. Security encompasses resilience but also includes protocols to protect against and recover from attacks, such as data poisoning or unauthorised access.
  4. Accountable and transparent: Transparency involves making information about an AI system and its outputs available to those interacting with it. Accountability relates to the responsibility for the system's outcomes and depends upon transparency to be effective.
  5. Explainable and interpretable: Explainability refers to describing the internal mechanisms of how an AI system works, while interpretability refers to the meaning and context of the system's output. These help users understand "how" and "why" a specific decision or recommendation was made.
  6. Privacy-enhanced: This characteristic relates to safeguarding human autonomy, identity, and dignity. It involves following norms like anonymity and confidentiality, and utilising privacy-enhancing technologies (PETs) to prevent the unauthorised identification of individuals.
  7. Fair, with harmful bias managed: Fairness involves addressing concerns for equality and equity. This requires managing various forms of bias, including systemic bias (in datasets or organisational norms), computational bias (statistical errors), and human-cognitive bias (how individuals perceive information).

For a system to be truly trustworthy, these characteristics must be balanced based on the specific context of use, as they often involve tradeoffs (for example, a more private system might lose some predictive accuracy).
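The balancing act described above can be sketched as a context-dependent weighted score. The seven characteristic names come from the framework; the idea of a single weighted score, the weights, and the per-characteristic scores are invented purely to illustrate that different contexts weight the characteristics differently.

```python
# The seven trustworthiness characteristics from the AI RMF.
CHARACTERISTICS = [
    "valid_and_reliable", "safe", "secure_and_resilient",
    "accountable_and_transparent", "explainable_and_interpretable",
    "privacy_enhanced", "fair_with_bias_managed",
]

def weighted_trust_score(scores: dict, weights: dict) -> float:
    """Combine per-characteristic scores (0-1) using context-specific weights."""
    total_weight = sum(weights.values())
    return sum(scores[c] * weights[c] for c in CHARACTERISTICS) / total_weight

# Hypothetical assessment: a safety-critical use case weights
# safety and validity more heavily than the other characteristics.
scores = {c: 0.8 for c in CHARACTERISTICS}
scores["privacy_enhanced"] = 0.6   # e.g. accuracy was favoured over privacy
weights = {c: 1.0 for c in CHARACTERISTICS}
weights["safe"] = 3.0
weights["valid_and_reliable"] = 3.0

print(round(weighted_trust_score(scores, weights), 3))
```

The point of the sketch is only that the same raw scores yield different overall judgements under different weightings; NIST does not prescribe any such formula.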

How can an organisation implement an AI RMF Profile?

An organisation can implement an AI RMF Profile by tailoring the Framework's functions, categories, and subcategories to a specific setting or application based on its unique requirements, risk tolerance, and available resources.

The implementation process generally involves the following steps and considerations:

1. Identify the type of profile needed

Organisations can develop different types of profiles depending on their goals:

  • Use-case profiles: These are designed for specific applications, such as an AI RMF profile for hiring or fair housing.
  • Temporal profiles: These help track progress over time. A current profile describes the organisation's existing AI risk management activities, while a target profile outlines the desired outcomes needed to meet specific risk management goals.
  • Cross-sectoral profiles: These cover risks for models or business processes used across multiple sectors, such as the acquisition of large language models or cloud-based services.

2. Conduct a gap analysis

By comparing a current profile against a target profile, an organisation can identify specific gaps in its risk management objectives. This comparison helps the organisation understand which categories or subcategories of the AI RMF Core need more attention or resources.
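At its simplest, this comparison is a set difference over Core subcategory outcomes. The identifiers below follow the AI RMF naming pattern (e.g. GOVERN 1.1), but which outcomes the organisation has achieved, and which its target profile requires, are hypothetical.

```python
# Subcategory outcomes the organisation currently satisfies (hypothetical).
current_profile = {"GOVERN 1.1", "GOVERN 1.2", "MAP 1.1", "MEASURE 1.1"}

# Outcomes required by the target profile (hypothetical).
target_profile = {"GOVERN 1.1", "GOVERN 1.2", "GOVERN 2.1",
                  "MAP 1.1", "MAP 2.1", "MEASURE 1.1", "MANAGE 1.1"}

# The gap: subcategories that need more attention or resources.
gaps = sorted(target_profile - current_profile)
print(gaps)
```

Each item in the resulting gap list then becomes a candidate for the action plan described in the next step.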

3. Develop and prioritise an action plan

Once gaps are identified, the organisation can:

  • Create action plans to address those gaps and fulfill the outcomes of specific subcategories.
  • Prioritise mitigation efforts based on the organisation's specific needs and established risk management processes.
  • Gauge resource needs, such as staffing and funding, to achieve their target risk management goals in a cost-effective manner.

4. Maintain flexibility

The AI RMF does not prescribe specific templates for these profiles. This allows organisations the flexibility to implement the framework in a way that best aligns with their internal goals, legal or regulatory requirements, and industry best practices. Profiles also allow organisations to compare their risk management approaches with those of other entities.

