Residual Risk
Risk remaining after risk treatment has been applied: the sum of all risks that mitigations do not fully eliminate, including those left entirely unaddressed.
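A minimal sketch of that arithmetic, assuming each identified risk carries an estimated severity score and a treatment effectiveness between 0 and 1 (the field names and 0-10 scoring scale are illustrative, not prescribed by any framework):

```python
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    severity: float          # estimated impact score (illustrative 0-10 scale)
    treatment_effect: float  # fraction of the risk removed by treatment, 0.0-1.0

def residual_risk(risks: list[Risk]) -> float:
    """Sum the portion of each risk that treatment does not eliminate."""
    return sum(r.severity * (1.0 - r.treatment_effect) for r in risks)

risks = [
    Risk("biased outputs for under-represented groups", severity=8.0, treatment_effect=0.6),
    Risk("hallucinated citations", severity=5.0, treatment_effect=0.0),  # fully unmitigated
]
print(residual_risk(risks))  # 8.2 remains after treatment
```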
In AI systems, residual risk directly impacts end users and affected communities. Documenting residual risks serves two functions:
- Provider accountability: Forces system providers to fully consider deployment risks and what remains unaddressed
- User awareness: Informs end users about potential negative impacts of interacting with the system
The framework requires documenting negative residual risks to both downstream acquirers of AI systems (organizations deploying purchased systems) and end users.
Residual risk is distinct from accepted risk (risk knowingly retained) and transferred risk (risk shifted to another party, such as through insurance or contractual terms).
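One way to keep these distinctions explicit in documentation is to tag each risk with its disposition and its affected audience. The sketch below assumes a simple register entry; the enum values and field names are hypothetical, since the framework does not prescribe a record format:

```python
from dataclasses import dataclass
from enum import Enum

class Disposition(Enum):
    RESIDUAL = "remains after treatment"
    ACCEPTED = "knowingly retained"
    TRANSFERRED = "shifted to another party (insurance, contract)"

@dataclass
class RiskRecord:
    description: str
    disposition: Disposition
    affected_parties: list[str]   # e.g. "downstream acquirers", "end users"
    disclosure: str               # plain-language statement of what the system cannot do

record = RiskRecord(
    description="Model may produce inaccurate summaries of legal documents",
    disposition=Disposition.RESIDUAL,
    affected_parties=["downstream acquirers", "end users"],
    disclosure="Summaries are not verified against source text; human review is required.",
)
```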
Effective residual risk documentation requires honesty about what the system cannot do, not just what it can.
Related: 05-atom—ai-risk-definition, 05-atom—trustworthy-ai-characteristics