NIST AI 600-1: AI Risk Management Framework - Generative AI Profile

Source Overview

A cross-sectoral profile and companion resource to the AI Risk Management Framework (AI RMF 1.0) specifically addressing Generative AI risks. Developed pursuant to Executive Order 14110 on Safe, Secure, and Trustworthy Artificial Intelligence.

Core Framing

The document positions GAI risks as qualitatively distinct from both traditional software risks and general AI risks: GAI does not merely amplify existing concerns; it creates genuinely novel risk categories. The framework is voluntary, sector-agnostic, and designed to be adapted rather than rigidly followed.

Key Contributions

Risk Taxonomy: Defines 12 risk categories unique to or exacerbated by GAI:

  1. CBRN Information or Capabilities
  2. Confabulation
  3. Dangerous, Violent, or Hateful Content
  4. Data Privacy
  5. Environmental Impacts
  6. Harmful Bias and Homogenization
  7. Human-AI Configuration
  8. Information Integrity
  9. Information Security
  10. Intellectual Property
  11. Obscene, Degrading, and/or Abusive Content
  12. Value Chain and Component Integration

Risk Dimensions: Four axes for categorizing where and how risks manifest:

  • Stage of AI lifecycle (design, development, deployment, operation, decommission)
  • Scope (model/system, application, ecosystem)
  • Source (model-derived, input-derived, output-derived, human-derived)
  • Time scale (immediate vs. extended)
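The taxonomy and the four risk dimensions above could be encoded as a simple data structure for tagging observed risks during review. This is an illustrative sketch only: the enum member names, the `RiskAssessment` class, and the example instance are my own labels, not structures defined in the profile itself.

```python
from dataclasses import dataclass
from enum import Enum

class GAIRisk(Enum):
    """The 12 GAI risk categories listed in NIST AI 600-1."""
    CBRN = "CBRN Information or Capabilities"
    CONFABULATION = "Confabulation"
    DANGEROUS_CONTENT = "Dangerous, Violent, or Hateful Content"
    DATA_PRIVACY = "Data Privacy"
    ENVIRONMENTAL = "Environmental Impacts"
    BIAS_HOMOGENIZATION = "Harmful Bias and Homogenization"
    HUMAN_AI_CONFIG = "Human-AI Configuration"
    INFO_INTEGRITY = "Information Integrity"
    INFO_SECURITY = "Information Security"
    INTELLECTUAL_PROPERTY = "Intellectual Property"
    OBSCENE_CONTENT = "Obscene, Degrading, and/or Abusive Content"
    VALUE_CHAIN = "Value Chain and Component Integration"

@dataclass(frozen=True)
class RiskAssessment:
    """Tags a risk with the profile's four categorization axes."""
    risk: GAIRisk
    lifecycle_stage: str  # design | development | deployment | operation | decommission
    scope: str            # model/system | application | ecosystem
    source: str           # model- | input- | output- | human-derived
    time_scale: str       # immediate | extended

# Hypothetical example: confabulation observed at deployment,
# scoped to the application layer, manifesting in model outputs.
example = RiskAssessment(
    risk=GAIRisk.CONFABULATION,
    lifecycle_stage="deployment",
    scope="application",
    source="output-derived",
    time_scale="immediate",
)
```

Encoding the axes as free-form strings (rather than further enums) keeps the sketch short; a real implementation would likely constrain those fields too.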

Terminological Contribution: Explicitly rejects “hallucination” as anthropomorphizing; adopts “confabulation” instead.

Structural Approach

Follows the Govern-Map-Measure-Manage functions from AI RMF 1.0, pairing suggested actions with each AI RMF subcategory and mapping those actions to the GAI risk categories they address.
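The subcategory-to-risk mapping described above could be modeled as records indexed by risk category. Everything here is a placeholder sketch: the subcategory IDs, action text, and index structure are illustrative assumptions, not quotations from the profile.

```python
from typing import NamedTuple

class SuggestedAction(NamedTuple):
    subcategory: str        # AI RMF 1.0 subcategory the action supports (placeholder IDs)
    action: str             # what the organization should do (placeholder text)
    risks: tuple[str, ...]  # GAI risk categories the action addresses

# Two hypothetical entries, for illustration only.
actions = [
    SuggestedAction(
        subcategory="GOVERN 1.3",
        action="Define acceptable-use policies covering GAI outputs.",
        risks=("Information Integrity", "Harmful Bias and Homogenization"),
    ),
    SuggestedAction(
        subcategory="MEASURE 2.5",
        action="Evaluate outputs for confabulated claims before release.",
        risks=("Confabulation",),
    ),
]

# Index actions by risk category for lookup during a risk review.
by_risk: dict[str, list[SuggestedAction]] = {}
for a in actions:
    for r in a.risks:
        by_risk.setdefault(r, []).append(a)
```

The many-to-many shape (one action can mitigate several risk categories) mirrors how the profile attaches each suggested action to one subcategory but potentially multiple GAI risks.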

Limitations Acknowledged

  • Focuses on empirically demonstrated risks; speculative future risks excluded
  • Measurement science for AI safety described as “immature”
  • Many risks difficult to scope given uncertainty about GAI capabilities

Extracted Content

Related: [None yet]