Understanding the National Institute of Standards and Technology (NIST) AI Risk Management Framework

Reva Schwartz, Ashley Casovan, and Var Shankar

The publication of NIST’s AI RMF 1.0 in January 2023 was a major milestone for NIST and the AI community. The AI RMF was developed in an open and transparent manner, incorporating feedback from an extensive public comment process, including comments on three previously published drafts and input from public workshops. Organizations tracking AI laws, policies, and frameworks should understand the AI RMF’s objectives, what it does not seek to do, and how it relates to other AI governance efforts.

What are the objectives of NIST’s AI RMF?

The AI RMF provides organizations with a guiding structure to operate within, and outcomes to aspire towards, based on their specific contexts, use cases, and skillsets. It provides a common set of AI risk concepts, terms, and resources and categorizes potential AI-related harms to people, organizations, and ecosystems. As a rights-preserving framework, the AI RMF “offers a path to minimize potential negative impacts of AI systems, such as threats to civil liberties and rights, while also providing opportunities to maximize positive impact.”

The primary audience for the AI RMF is AI actors across the AI lifecycle. AI actors are not limited to AI developers; they include organizations and individuals who play an active role throughout the AI lifecycle, including professionals such as data scientists, data engineers, human factors experts, modelers, domain experts, systems integrators, and system operators, as well as impacted individuals and communities, general public end users, and many others. In this way, the AI RMF signals to organizations that the responsibility to manage AI risks is shared across the organization.

Characteristics of Trustworthy AI Systems

The AI RMF describes the characteristics of trustworthy AI systems, pictured above at a high level. Its core also describes four functions to manage AI risk (Govern, Map, Measure, Manage), with outcomes organized under specific categories and subcategories. The Govern function is intended to be cross-cutting and infused across the other three functions. An accompanying Playbook provides AI RMF users with actionable suggestions for how to meet the function outcomes, with detail at the subcategory level.

AI RMF Functions
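To make the core’s structure concrete, below is a minimal, hypothetical Python sketch, not an official NIST artifact, of one way an organization might represent the four functions and their subcategory outcomes when tracking progress against the Playbook. The GOVERN 1.1 outcome text is quoted from the AI RMF core; the status values and the `open_items` helper are illustrative assumptions of ours.

```python
# Illustrative sketch only: one possible in-house representation of the
# AI RMF core (functions -> subcategories) for tracking outcome status.
from dataclasses import dataclass, field

@dataclass
class Subcategory:
    id: str                        # e.g., "GOVERN 1.1"
    outcome: str                   # outcome statement from the AI RMF core
    status: str = "not_started"    # assumed convention: not_started / in_progress / met

@dataclass
class Function:
    name: str
    subcategories: list[Subcategory] = field(default_factory=list)

# The four core functions; Govern is cross-cutting and informs the other three.
rmf = {name: Function(name) for name in ("Govern", "Map", "Measure", "Manage")}

rmf["Govern"].subcategories.append(
    Subcategory(
        "GOVERN 1.1",
        "Legal and regulatory requirements involving AI are understood, "
        "managed, and documented.",
    )
)

def open_items(core: dict[str, Function]) -> list[str]:
    """Return the IDs of subcategories whose outcomes are not yet marked met."""
    return [
        s.id
        for f in core.values()
        for s in f.subcategories
        if s.status != "met"
    ]

print(open_items(rmf))  # -> ['GOVERN 1.1']
```

A real adoption would attach evidence, owners, and review dates to each subcategory, and Govern items would typically be revisited whenever Map, Measure, or Manage activities change.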

What is NIST’s AI RMF not intended to do?

Since the AI RMF is a voluntary framework, adhering to its concepts and framing is not in itself legally required. However, given that other NIST frameworks, including those for privacy and cybersecurity, constitute fundamental subject-matter guidance and are used by organizations in the public, private, and nonprofit sectors around the world, the AI RMF’s concepts and framing are likely to see similarly wide adoption. Additionally, although voluntary, the AI RMF aligns in many important ways with the approach of the draft European Union AI Act, as discussed further below.

The AI RMF does not provide a standard, or specific guidance, related to particular use cases, organization sizes, or sectors. Rather, it is “use-case agnostic, providing flexibility to organizations of all sizes and in all sectors and throughout society.” This approach is reflected throughout the AI RMF. For example, it notes that risk tolerance and acceptable risk levels are “highly contextual and application and use-case specific.” That said, NIST will engage with parties interested in developing “implementation profiles,” which are snapshots of how the AI RMF would be used in different industries or for different workflows. These profiles are expected to be developed largely by the broader stakeholder community and contributed to NIST, and they are anticipated to include “use-case implementation profiles,” such as an AI RMF fair housing profile or hiring profile.

How does NIST’s AI RMF relate to AI risk management efforts by others in the AI community?

NIST’s AI RMF and accompanying materials such as the Playbook and Crosswalks point readers to other resources that organizations can use to complement the AI RMF. Crosswalks show how the AI RMF maps to other guidance documents, standards, and regulatory requirements. As part of the AI RMF core, organizations seeking to manage AI risk are encouraged to consult applicable laws and regulations.

Organizations seeking to complement the AI RMF with legal, policy, industry-specific, and use-case-specific guidance should consider which sources of authority and guidance in the AI governance ecosystem may apply or be valuable.

| Source Types | Examples |
| --- | --- |
| AI Laws | Draft American Data Privacy and Protection Act; Draft EU AI Act |
| AI Principles | OECD Recommendation on AI |
| AI Guidance and Frameworks | NIST AI RMF; OSTP Blueprint for an AI Bill of Rights |
| Sectoral Laws | National Traffic and Motor Vehicle Safety Act (US Department of Transportation); Iowa Automated Driving System Law |
| Standards and Certifications | AI Management System Standard (ISO/IEC 42001); AI Risk Management Standard (ISO/IEC 23894); Responsible AI Certification Program; Standard for Evaluation of Autonomous Products (ANSI/UL 4600); Road Vehicles Functional Safety (ISO 26262) |
| Industry Best Practices | Guidance from the Society of Automotive Engineers (SAE) |

AI Governance Ecosystem – Sample

OECD Recommendation on AI: The AI RMF draws upon and adapts internationally accepted AI principles, concepts, and definitions outlined in the OECD Recommendation on AI. For example, its definition of AI systems and its concepts of the AI lifecycle and AI actors (organizations and individuals that play an active role in the AI lifecycle) are partially adapted from the OECD Recommendation on AI.

Draft EU AI Act: The EU AI Act, once enacted and in force, will impose specific legal requirements on covered organizations using regulated AI systems, whereas the AI RMF is voluntary. However, the EU AI Act also takes a risk-based approach and considers similar categories of risk, as outlined in a Crosswalk that accompanies NIST’s AI RMF. Additionally, European lawmakers are reportedly considering aligning the draft AI Act’s approaches and definitions with those of the AI RMF.

International standards published by standards development organizations (such as those published under ISO/IEC JTC 1/SC 42 Artificial Intelligence): Standards development organizations publish standards of various kinds, including standards of general applicability, sector-specific standards, process-level standards, and product-level standards. As discussed above, the AI RMF does not provide a standard, or specific guidance, related to particular use cases, organization sizes, or sectors, though NIST intends to eventually share “implementation profiles” for specific industries and workflows. Additionally, in its role as the federal standards coordinator, NIST works with organizations in the public, private, and nonprofit sectors to monitor standards development activities and gaps and to track relevant AI standards. NIST also encourages the incorporation of the AI RMF into international standards and works to align the AI RMF with applicable international standards.

Responsible AI Certification Program: The RAI Institute’s Certification Program aligns with and incorporates the concepts and approach of the AI RMF. It draws on globally adopted principles from international organizations, current and proposed legal requirements, and emerging norms to inform the conformity assessment scheme at the heart of the certification program. Whereas the AI RMF is intended to be law-, regulation-, use-case-, domain-, and technology-agnostic, the RAI Institute Certification Program is a measurable assessment tailored to specific use cases. Similar to SOC 2 or LEED, its end result is a certification stamp on a product demonstrating success in an independent audit. The RAI Institute strongly encourages organizations to use voluntary tools like the AI RMF, along with resources the RAI Institute has developed, to prepare for product certification and future regulatory compliance. Since the AI RMF is a “living document” that will evolve, and its associated Playbook is expected to be updated twice yearly, the RAI Institute will continue to work closely with NIST to understand changes to the AI RMF and incorporate those learnings, ensuring that the latest knowledge on AI oversight is reflected in the RAI Institute’s work.

NIST’s AI RMF represents a major step forward in the global AI risk management conversation, both building upon and contributing to trustworthy AI concepts and approaches. NIST and the AI community, including the Responsible AI Institute, are now developing AI RMF resources and implementation profiles that use the AI RMF alongside applicable laws, standards, certifications, and best practices to demonstrate trustworthy AI implementation at the use-case level.
