# Appropriate Reliance Framework

## Overview
Appropriate reliance is the target state for human-AI collaboration: users follow AI advice when it’s correct and override it when it’s wrong. This framework defines the components needed to achieve it and the failure modes that prevent it.
## Components

### 1. User’s Model of AI Capability
Users need accurate beliefs about:
- AI accuracy in the domain
- AI confidence reliability (calibration)
- Conditions where AI performs well or poorly
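Confidence calibration in the second bullet is measurable. As one illustration (not part of the framework itself), expected calibration error compares an AI's stated confidence against its observed accuracy; the function name and data below are hypothetical:

```python
# Sketch: checking whether an AI's stated confidence is reliable,
# via expected calibration error (ECE) over binned predictions.
# Inputs are illustrative; confidences are probabilities in [0, 1].

def expected_calibration_error(confidences, correct, n_bins=10):
    """Bin predictions by stated confidence, then compare each bin's
    average confidence to its observed accuracy. 0.0 = perfectly
    calibrated; larger values mean over- or under-confidence."""
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)
        bins[idx].append((conf, ok))
    total = len(confidences)
    ece = 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        accuracy = sum(ok for _, ok in b) / len(b)
        ece += (len(b) / total) * abs(avg_conf - accuracy)
    return ece
```

A model that says "80% confident" and is right 80% of the time scores 0.0; a model that says "100%" but is right half the time scores 0.5.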
### 2. User’s Model of Own Capability
Users need calibrated self-assessment:
- Personal accuracy without AI
- Relative expertise compared to AI
- Conditions where personal judgment is stronger
### 3. Per-Instance Assessment
For each decision, users must evaluate:
- AI’s confidence on this specific case
- Personal confidence on this specific case
- Which source is more likely correct
### 4. Behavioral Execution
Users must actually act on their assessment:
- Accept AI advice when AI seems more reliable
- Override AI advice when personal judgment seems stronger
- Resist both blind acceptance and reflexive rejection
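Components 3 and 4 together amount to a per-instance decision rule. A minimal sketch, assuming both confidences are expressed as comparable probabilities (the function name and tie-breaking toward personal judgment are illustrative choices, not prescribed by the framework):

```python
# Sketch of the per-instance reliance decision: follow whichever
# source seems more likely to be correct on this specific case.
# Assumes ai_confidence and own_confidence are comparable values.

def choose_answer(ai_answer, ai_confidence, own_answer, own_confidence):
    """Accept AI advice when the AI seems more reliable on this case;
    otherwise override with personal judgment."""
    if ai_confidence > own_confidence:
        return ai_answer   # accept AI advice
    return own_answer      # override with personal judgment
```

In practice the two confidence estimates are rarely this clean; the point is that appropriate reliance requires an explicit comparison per case rather than a blanket accept-all or reject-all policy.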
## Failure Modes
| Failure | Cause | Symptom |
|---|---|---|
| Over-reliance | Inflated AI model, deflated self-model, or low engagement | Accepting incorrect AI advice |
| Under-reliance | Deflated AI model, inflated self-model, or automation aversion | Rejecting correct AI advice |
| Uncalibrated trust | Inability to detect AI uncertainty characteristics | Uniform reliance regardless of AI confidence |
| Inverse response | Misinterpreting uncertainty signals | Increasing reliance when it should decrease (or vice versa) |
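The first two failure modes can be estimated from logged interactions. A hedged sketch, assuming each logged event records whether the AI was correct and whether the user followed it (the event format and function name are hypothetical):

```python
# Sketch: estimating over- and under-reliance rates from a log of
# (ai_correct, followed_ai) boolean pairs. Format is illustrative.

def reliance_failure_rates(events):
    """Over-reliance rate: fraction of AI-wrong cases the user followed.
    Under-reliance rate: fraction of AI-right cases the user rejected."""
    followed_when_wrong = [f for ok, f in events if not ok]
    followed_when_right = [f for ok, f in events if ok]
    over = (sum(followed_when_wrong) / len(followed_when_wrong)
            if followed_when_wrong else 0.0)
    under = (sum(not f for f in followed_when_right) / len(followed_when_right)
             if followed_when_right else 0.0)
    return over, under
```

Note that a rate of 0.0 on both only indicates appropriate reliance for the cases observed; it says nothing about calibration on unseen conditions.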
## When to Use This Framework
- Designing human-AI decision support systems
- Evaluating AI assistant interfaces
- Training users to work with AI tools
- Diagnosing collaboration failures
## Limitations
The framework assumes users have cognitive capacity and motivation to engage with these assessments. High time pressure, fatigue, or low stakes may prevent thoughtful evaluation regardless of interface design.
Related: 01-molecule—calibration-transparency-principle, 01-molecule—human-ai-configuration