Distributed Responsibility Across AI Actors

Context

AI systems pass through multiple phases and touch many stakeholders before reaching end users. The interdependencies between activities and actors can make it difficult to reliably anticipate impacts.

Problem

Early design decisions can alter how a system behaves in deployment. The dynamics of the deployment context can reshape a system's impacts. The best intentions in one phase can be undermined by decisions made in another.

Organizations often lack visibility or control across the full lifecycle. Developers may not know how systems will be used. Deployers may not understand how systems were trained.

Solution Pattern

The NIST AI RMF identifies distinct AI actor categories across lifecycle phases:

AI Design: Plan, design, collect and process data. Creates system concept, objectives, and training data.

AI Development: Build models, select algorithms, train and test. Creates the technical artifact.

AI Deployment: Pilot, integrate, ensure compliance, manage change. Bridges development to operational use.

Operation and Monitoring: Operate systems, assess outputs, track impacts. Sustains systems in production.

TEVV: Test, evaluation, verification, and validation performed throughout the lifecycle. Provides independent verification across phases.

Each actor category has distinct risk perspectives and responsibilities. Effective risk management requires coordination mechanisms that cross actor boundaries.
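
One way to make such coordination concrete is to attach a structured handoff record to the system artifact at each phase boundary. The Python sketch below models the actor categories as an enum and flags risks that would leave a phase with no downstream owner. All class, field, and method names here (HandoffRecord, RiskItem, unowned_risks) are illustrative assumptions, not terminology from the NIST AI RMF.

```python
from dataclasses import dataclass, field
from enum import Enum


class ActorCategory(Enum):
    """The NIST AI RMF lifecycle actor categories."""
    DESIGN = "AI Design"
    DEVELOPMENT = "AI Development"
    DEPLOYMENT = "AI Deployment"
    OPERATION = "Operation and Monitoring"
    TEVV = "TEVV"


@dataclass
class RiskItem:
    description: str
    raised_by: ActorCategory
    owner: ActorCategory          # who is accountable for mitigation


@dataclass
class HandoffRecord:
    """Travels with the system artifact across a phase boundary."""
    from_actor: ActorCategory
    to_actor: ActorCategory
    open_risks: list[RiskItem] = field(default_factory=list)
    tevv_signoff: bool = False    # independent check at the boundary

    def unowned_risks(self) -> list[RiskItem]:
        # A responsibility gap: a risk still owned by the actor handing
        # the system off, with no one downstream picking it up.
        return [r for r in self.open_risks if r.owner == self.from_actor]


# Example handoff from development to deployment: the open risk is
# still owned by development, so it surfaces as a gap to resolve
# before TEVV signs off.
handoff = HandoffRecord(
    from_actor=ActorCategory.DEVELOPMENT,
    to_actor=ActorCategory.DEPLOYMENT,
    open_risks=[RiskItem(
        description="Training data underrepresents one deployment region",
        raised_by=ActorCategory.DESIGN,
        owner=ActorCategory.DEVELOPMENT,
    )],
)
print(handoff.unowned_risks())
```

The design choice worth noting is that the record names an owner for every risk rather than a mitigation status: ownership is what goes missing at handoff points, so it is what the boundary artifact should force the parties to state.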

Consequences

Risk metrics used by developers may not align with those that matter to deployers. Organizations acquiring third-party systems inherit risks they may not fully understand. Responsibility gaps emerge at handoff points.
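
A lightweight check at acquisition or handoff time can at least surface the metric mismatch. The sketch below assumes both sides publish their metrics as plain name-to-threshold dictionaries; the function and parameter names (metric_gaps, developer_metrics, deployer_requirements) are hypothetical, not from the framework.

```python
def metric_gaps(developer_metrics: dict[str, float],
                deployer_requirements: dict[str, float]) -> list[str]:
    """Flag deployer-required metrics (treated as minimum thresholds)
    that the developer never measured, or measured but fell short of."""
    gaps = []
    for name, minimum in deployer_requirements.items():
        if name not in developer_metrics:
            gaps.append(f"{name}: not reported by developer")
        elif developer_metrics[name] < minimum:
            gaps.append(f"{name}: {developer_metrics[name]:.2f} "
                        f"below required {minimum:.2f}")
    return gaps


# Example: the developer optimized for aggregate accuracy, while the
# deployer also needs subgroup recall that was never measured.
print(metric_gaps(
    {"accuracy": 0.94},
    {"accuracy": 0.90, "subgroup_recall": 0.85},
))
# -> ['subgroup_recall: not reported by developer']
```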

The framework emphasizes that all parties share responsibility for trustworthiness “regardless of their role in the lifecycle.”

Related: 05-molecule—govern-map-measure-manage-framework, 05-atom—tevv-throughout-lifecycle