The AI RMF is a voluntary framework to help organizations design, develop, deploy, and use AI responsibly. It aims to manage risks while fostering trustworthy AI. Sector-neutral and adaptable, it does not prescribe specific rules or thresholds; rather, it complements existing enterprise risk management practices. It is meant to be a living document, updated over time, with a full review incorporating community input expected by 2028.
Risk is defined as the combination of likelihood (probability) and magnitude (impact) of negative outcomes.
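To make the definition concrete, here is a toy scoring sketch; the harms, the 0–1 likelihood scale, the 1–10 magnitude scale, and the multiplicative combination are all illustrative assumptions, not prescribed by the framework:

```python
# Toy illustration of risk = likelihood x magnitude. The harms, the 0-1
# likelihood scale, and the 1-10 magnitude scale are assumptions.
harms = {
    "biased loan denials": {"likelihood": 0.30, "magnitude": 9},
    "minor UI latency":    {"likelihood": 0.80, "magnitude": 1},
}

def risk_score(likelihood: float, magnitude: float) -> float:
    """Combine probability of occurrence with severity of impact."""
    return likelihood * magnitude

# Rank harms so limited mitigation effort targets the highest risks first.
for harm in sorted(harms, key=lambda h: risk_score(**harms[h]), reverse=True):
    print(f"{harm}: {risk_score(**harms[harm]):.2f}")
```

Ranking scores this way is one simple route into the prioritization problem discussed next.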
AI systems pose unique risks (versus traditional software) because of dynamics such as evolving data distributions, complexity, opacity, socio-technical interactions, and emergent behavior.
Challenges in AI risk management include difficulties in measurement, defining risk tolerance, prioritizing risks (not all risks can be eliminated), and integrating AI risk into enterprise risk management.
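One measurement difficulty named above, evolving data distributions, can at least be detected statistically. A minimal drift-check sketch using a two-sample Kolmogorov–Smirnov test (the feature values and the alert threshold are assumptions):

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(seed=0)
train_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)  # distribution at training time
live_feature = rng.normal(loc=0.4, scale=1.0, size=5_000)   # shifted distribution in production

# Two-sample KS test: a small p-value indicates the live inputs no longer
# match the training distribution.
stat, p_value = ks_2samp(train_feature, live_feature)
if p_value < 0.01:  # alert threshold is an assumption; set it per risk tolerance
    print(f"possible data drift (KS statistic={stat:.3f}, p={p_value:.1e})")
```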
The framework is addressed to “AI actors”: the individuals, teams, or organizations that play roles across the AI lifecycle (designers, developers, deployers, validators, and users).
It also emphasizes inclusion of diverse perspectives, including end users, impacted communities, civil society, and researchers, to help surface risks and tradeoffs.
The document outlines a set of trustworthiness characteristics for AI systems. These are not universally required in equal measure; tradeoffs are expected. Key characteristics include:
Valid and Reliable – correctness, robustness, generalization, consistency over time
Safe – avoid causing harm under defined conditions
Secure and Resilient – resistance to adversarial attack, data poisoning; ability to recover gracefully
Accountable & Transparent – clear assignment of responsibility; transparency of design, data, decision pathways
Explainable & Interpretable – making system behavior understandable to relevant stakeholders
Privacy-Enhanced – protecting individual autonomy, confidentiality, limiting unintended inference
Fair (with Harmful Bias Managed) – reducing unfair or discriminatory outcomes; recognizing multiple types of bias (systemic, computational, human cognitive)
The interplay and tradeoffs among these characteristics are central: maximizing accuracy may reduce interpretability, strong privacy measures may degrade performance, and fairness constraints may affect other objectives.
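As one concrete way to measure the fairness characteristic, a demographic parity gap compares favorable-outcome rates across groups; the predictions, group labels, and 1/0 encoding below are illustrative assumptions:

```python
import numpy as np

# Illustrative predictions (1 = favorable outcome) and group labels.
predictions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
groups = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

def demographic_parity_gap(preds: np.ndarray, grp: np.ndarray) -> float:
    """Largest difference in favorable-outcome rate between any two groups."""
    rates = [preds[grp == g].mean() for g in np.unique(grp)]
    return float(max(rates) - min(rates))

print(f"demographic parity gap = {demographic_parity_gap(predictions, groups):.2f}")
# group a rate 0.60, group b rate 0.40 -> gap 0.20
```

Shrinking this gap (e.g., via constrained training) can cost accuracy, which is exactly the kind of tradeoff the framework expects organizations to weigh explicitly.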
AI is socio-technical: risks emerge not only from code and data, but from context, use, and interaction with people.
Trustworthiness is key: AI systems should be valid/reliable, safe, secure/resilient, accountable/transparent, explainable/interpretable, privacy-enhanced, and fair (with harmful bias managed).
Risk tradeoffs are inevitable: optimizing one trustworthiness property may reduce another; decisions must balance priorities.
The heart of the framework is the AI RMF Core, which is organized into four high-level functions: Govern, Map, Measure, and Manage. These functions are meant to be iterative, overlapping, and adaptable to different stages of the AI lifecycle.
Govern – Establish structures, policies, roles, and culture for AI risk management across the organization:
Establish policies, processes, and a risk-aware culture for AI risk management.
Integrate risk and trustworthiness principles into organizational strategy.
Define accountability, roles, oversight, and processes for lifecycle management (including decommissioning).
Map – Understand the AI system's context: goals, stakeholders, potential risks, and intended uses:
Identify and characterize the context, objectives, stakeholders, uses, and potential risks of the AI system.
Map out how the system interacts with data, models, users, and the environment.
Understand potential harms, threat models, dependencies, and system constraints.
Measure – Use metrics, testing, and monitoring to evaluate trustworthiness and risk over time:
Select, define, and collect metrics and indicators to monitor trustworthiness and risk.
Perform testing, evaluation, verification, and validation (TEVV), and monitor for changes during deployment.
Compare measured performance and behavior against expectations, thresholds, and risk tolerances.
Manage – Take actions to mitigate, monitor, adapt, or retire AI systems as risks evolve:
Respond to identified risks via mitigation, control, escalation, or compensation.
Adapt or retrain systems, intervene or shut down when necessary, refine deployment, and monitor residual risks.
Maintain feedback loops and continuous improvement (a minimal Measure-to-Manage sketch follows this list).
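Here is a minimal sketch of how Measure outputs could feed Manage decisions. The metric names, tolerances, and responses are invented for illustration; the framework itself prescribes none of them:

```python
# Minimal Measure -> Manage loop. Metric names, tolerances, and responses
# are invented for illustration; the AI RMF prescribes none of them.
RISK_TOLERANCES = {
    "accuracy":      {"min": 0.90, "response": "retrain model"},
    "parity_gap":    {"max": 0.10, "response": "apply bias mitigation"},
    "drift_p_value": {"min": 0.01, "response": "escalate for human review"},
}

def manage(measurements: dict[str, float]) -> list[str]:
    """Return the responses triggered by out-of-tolerance measurements."""
    actions = []
    for metric, value in measurements.items():
        rule = RISK_TOLERANCES[metric]
        breached = ("min" in rule and value < rule["min"]) or (
            "max" in rule and value > rule["max"]
        )
        if breached:
            actions.append(f"{metric}={value}: {rule['response']}")
    return actions

# One monitoring cycle during deployment: accuracy and parity breach
# their tolerances; drift does not.
print(manage({"accuracy": 0.87, "parity_gap": 0.22, "drift_p_value": 0.50}))
```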
The framework also introduces Profiles: tailored selections of Core functions, categories, and subcategories that fit particular organizational needs, sectors, or use cases.
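The framework does not mandate any machine-readable format for Profiles, but a simple data structure like the following hypothetical example can make a Profile's selections explicit and auditable (every identifier below is invented for illustration):

```python
# Hypothetical machine-readable sketch of an AI RMF Profile: a tailored
# selection of Core functions and categories for one use case. Every
# identifier below is invented for illustration.
hiring_screening_profile = {
    "use_case": "resume screening assistant",
    "sector": "human resources",
    "selected": {
        "Govern":  ["accountability roles", "decommissioning policy"],
        "Map":     ["impacted communities", "intended vs. foreseeable uses"],
        "Measure": ["fairness metrics", "pre-deployment TEVV"],
        "Manage":  ["bias mitigation plan", "incident escalation path"],
    },
    "priority_characteristics": ["fair (harmful bias managed)", "accountable & transparent"],
}
```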
The authors recognize that measuring the effectiveness of the AI RMF (i.e., whether it actually leads to more trustworthy, lower-risk systems) is a challenging task and is part of future work.
They encourage organizations to self-evaluate periodically: has the RMF improved risk practices, clarity of decision making, accountability, cultural awareness, TEVV capacity, ongoing monitoring, and so on?
The document is meant to evolve. NIST anticipates community feedback, alignment with international standards, and new companion tools (e.g., a Playbook) to support adoption.
[GenAI] Assisted