NIST AI Risk Management Framework (AI RMF 1.0)
Citation
National Institute of Standards and Technology. (2023). Artificial Intelligence Risk Management Framework (AI RMF 1.0) (NIST AI 100-1). U.S. Department of Commerce. https://doi.org/10.6028/NIST.AI.100-1
Summary
Voluntary framework developed through a consensus process for managing AI risks across organizations. Positions risk management as the primary mechanism for achieving trustworthy AI. Structured around four core functions (Govern, Map, Measure, Manage) and seven trustworthiness characteristics.
Key Contributions
- Defines trustworthy AI through seven interrelated characteristics requiring tradeoff management
- Establishes the Govern-Map-Measure-Manage (GMMM) framework, with governance as a cross-cutting function (illustrated in the sketch after this list)
- Articulates AI-specific risks distinct from traditional software risks
- Provides lifecycle model with distributed responsibility across AI actors
- Emphasizes context-dependency of risk tolerance
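The framework's structure lends itself to a small data model. Below is a minimal sketch of how a risk register built around it might look: the four core functions and the seven characteristic names are taken from AI RMF 1.0 itself, while everything else (the RiskEntry class, its fields, the tolerance values, and the example) is a hypothetical illustration, not a NIST artifact.

```python
from dataclasses import dataclass
from enum import Enum

# Core functions and characteristic names come from AI RMF 1.0;
# the surrounding structure is an illustrative assumption.

class CoreFunction(Enum):
    GOVERN = "govern"    # cross-cutting: policies, accountability, culture
    MAP = "map"          # establish context, identify risks
    MEASURE = "measure"  # analyze, assess, and track identified risks
    MANAGE = "manage"    # prioritize and act on measured risks

class Characteristic(Enum):
    VALID_AND_RELIABLE = "valid & reliable"   # treated as foundational
    SAFE = "safe"
    SECURE_AND_RESILIENT = "secure & resilient"
    ACCOUNTABLE_AND_TRANSPARENT = "accountable & transparent"
    EXPLAINABLE_AND_INTERPRETABLE = "explainable & interpretable"
    PRIVACY_ENHANCED = "privacy-enhanced"
    FAIR_WITH_HARMFUL_BIAS_MANAGED = "fair, with harmful bias managed"

@dataclass
class RiskEntry:
    """One row in a hypothetical risk register: which characteristics a
    risk touches, which core function currently owns it, and a
    context-specific tolerance (the RMF leaves tolerance to the
    deploying organization)."""
    description: str
    characteristics: list[Characteristic]
    owning_function: CoreFunction
    tolerance: str          # e.g. "low" for a medical deployment
    tradeoff_notes: str = ""  # record why one characteristic was favored

# Example: an interpretability/accuracy tradeoff logged rather than hidden.
entry = RiskEntry(
    description="Post-hoc explanations degrade under the higher-accuracy model",
    characteristics=[Characteristic.EXPLAINABLE_AND_INTERPRETABLE,
                     Characteristic.VALID_AND_RELIABLE],
    owning_function=CoreFunction.MEASURE,
    tolerance="low",
    tradeoff_notes="Accuracy gain accepted; explanation fidelity tracked quarterly",
)
```

Leaving tolerance and tradeoff notes as free-form, context-specific fields mirrors the framework's refusal to prescribe a single risk tolerance.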
Companion Resources
- AI RMF Playbook: https://airc.nist.gov/AI_RMF_Knowledge_Base/Playbook
- Generative AI Profile (2024): NIST-AI-600-1
- Trustworthy and Responsible AI Resource Center: https://airc.nist.gov/
Extraction Notes
Strong framing insight: trustworthiness is positioned as requiring active tradeoff management, not checkbox compliance. The “Valid & Reliable” characteristic is explicitly treated as foundational, with the other characteristics building on top of it. The framework deliberately avoids prescribing risk tolerance, recognizing that it varies by context.
Related: 05-molecule—govern-map-measure-manage-framework, 05-atom—trustworthy-ai-characteristics