Govern-Map-Measure-Manage Framework
Overview
The NIST AI Risk Management Framework (AI RMF) structures AI risk management around four core functions. The architecture itself encodes a key insight: governance isn’t one function among equals; it’s the substrate that enables the others.
The Four Functions
GOVERN (Cross-cutting) Cultivates risk management culture. Establishes policies, processes, accountability structures, and organizational practices. Defines roles, responsibilities, and incentive structures. Addresses third-party and supply chain considerations.
Governance is infused throughout the other functions, not a sequential step but a continuous presence.
MAP (Contextual) Establishes context for framing risks. Defines intended purposes, identifies AI actors, characterizes potential impacts across stakeholder groups. Produces the contextual knowledge that informs everything downstream.
Outcomes: sufficient context to make initial go/no-go deployment decisions.
MEASURE (Analytical) Employs quantitative, qualitative, or mixed methods to analyze, assess, benchmark, and monitor risks. Documents functionality and trustworthiness characteristics. Tracks metrics over time.
Includes Test, Evaluation, Verification, and Validation (TEVV) processes.
MANAGE (Operational) Allocates resources to mapped and measured risks. Develops response plans, recovery procedures, and communication protocols. Implements continuous monitoring and improvement.
Structural Insight
The sequential logic (after GOVERN is in place): MAP → MEASURE → MANAGE. You can’t measure what you haven’t mapped. You can’t manage what you haven’t measured.
But the framework emphasizes iteration. Functions cross-reference as needed. Context evolves. Measurements inform remapping. Management reveals gaps in measurement.
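The iterative loop described above can be sketched as a toy feedback cycle: each round maps unaddressed risks, measures them, and manages them, and managing a risk may surface a gap that feeds back into remapping. The function name, the stand-in metric, and the gap-revealing mechanism are all illustrative assumptions, not framework prescriptions.

```python
def run_cycle(context: set[str], known_gaps: dict[str, str],
              max_rounds: int = 5) -> list[str]:
    """Toy MAP -> MEASURE -> MANAGE loop with feedback.

    context     -- identified risk areas (the evolving output of MAP)
    known_gaps  -- maps a managed risk to a gap its management reveals,
                   simulating how MANAGE can expose missing coverage
    """
    managed: list[str] = []
    for _ in range(max_rounds):
        mapped = sorted(context - set(managed))   # MAP: frame what's new
        if not mapped:
            break                                 # nothing left to address
        measured = {r: len(r) for r in mapped}    # MEASURE: stand-in metric
        for risk in measured:                     # MANAGE: respond, and note
            managed.append(risk)                  # any gaps it reveals
            if risk in known_gaps:
                context.add(known_gaps[risk])     # feed back into remapping
    return managed

# Usage: managing "data drift" reveals a monitoring gap, which a later
# round maps, measures, and manages in turn.
result = run_cycle({"data drift"}, {"data drift": "stale monitoring"})
```

The point of the sketch is the loop shape, not the stand-in metric: context evolves between rounds, so the cycle terminates only when a full pass surfaces nothing new.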
Limitations
The framework is deliberately non-prescriptive on risk tolerance. It provides structure for how to manage risk but not what level of risk is acceptable. This is both a feature (adaptability) and a gap (organizations left to define their own standards).
When to Use
Organizations establishing or maturing AI governance programs. Teams seeking structured approaches to AI risk assessment. Practitioners developing TEVV methodologies. Cross-functional alignment on AI system evaluation.
Related: 05-atom—trustworthy-ai-characteristics, 05-atom—tevv-throughout-lifecycle, 05-atom—ai-risk-measurement-challenges