Over-reliance vs. Under-reliance
Two failure modes in human-AI collaboration, both reducing decision quality through opposite mechanisms.
Over-reliance: Adopting AI advice when you shouldn’t. Measured as the percentage of times a user switched to incorrect AI advice among all switches. The failure: accepting bad recommendations.
Under-reliance: Ignoring AI advice when you should follow it. Measured as the percentage of times a user rejected correct AI advice among all correct AI recommendations. The failure: dismissing good recommendations.
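The two metrics above can be sketched directly from a decision log. This is a minimal illustration, assuming a hypothetical event format (one dict per decision with the user's initial answer, the AI's advice, the user's final answer, and the correct answer); none of these field names come from the note itself.

```python
def reliance_metrics(events):
    """Compute over- and under-reliance from a list of decision events.

    Each event is a dict with keys:
      'user_initial' - user's answer before seeing AI advice
      'ai_advice'    - the AI's recommendation
      'user_final'   - user's answer after seeing AI advice
      'correct'      - the ground-truth answer
    (Hypothetical schema for illustration.)
    """
    # A "switch": the user initially disagreed with the AI, then adopted its advice.
    switches = [e for e in events
                if e["user_initial"] != e["ai_advice"]
                and e["user_final"] == e["ai_advice"]]
    # Over-reliance: share of switches that went to *incorrect* AI advice.
    over = (sum(e["ai_advice"] != e["correct"] for e in switches)
            / len(switches)) if switches else 0.0

    # Under-reliance: among all *correct* AI recommendations,
    # share the user's final answer rejected.
    correct_ai = [e for e in events if e["ai_advice"] == e["correct"]]
    under = (sum(e["user_final"] != e["ai_advice"] for e in correct_ai)
             / len(correct_ai)) if correct_ai else 0.0
    return over, under


log = [
    {"user_initial": "A", "ai_advice": "B", "user_final": "B", "correct": "B"},  # good switch
    {"user_initial": "A", "ai_advice": "B", "user_final": "B", "correct": "A"},  # bad switch
    {"user_initial": "A", "ai_advice": "A", "user_final": "A", "correct": "A"},  # agreed all along
    {"user_initial": "B", "ai_advice": "A", "user_final": "B", "correct": "A"},  # rejected good advice
]
over, under = reliance_metrics(log)
print(over, under)  # 0.5 and 1/3: half the switches were bad; one of three correct recommendations was rejected
```

Note that the denominators differ by design: over-reliance conditions on the user's switching behavior, while under-reliance conditions on the AI's correctness.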
Both harm AI-assisted decision quality, but they require different interventions:
- Over-reliance suggests users trust AI too much or engage insufficiently with their own judgment
- Under-reliance suggests users trust AI too little or overweight their own expertise
Ideal collaboration involves neither: follow the AI when it's right, override it when it's wrong. This requires users to hold calibrated models of both the AI's capability and their own capability relative to the task.
Related: 01-atom—trust-reliance-distinction, 01-molecule—appropriate-reliance-framework