Calibrated Trust vs High Trust

Achieving high trust in AI systems is not the goal. Achieving calibrated trust is.

Insufficient trust leads to algorithm aversion: users reject AI assistance even when it would improve outcomes. Excessive trust leads to overreliance: the human-AI team performs worse than either the AI or the human would alone.

The research challenge isn’t convincing people to trust AI more. It’s helping people develop appropriate reliance: following AI recommendations when they’re likely correct, overriding them when they’re likely wrong.

This reframes the design problem. Rather than asking “how do we increase trust?” we should ask “how do we help users know when to trust?”
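
A minimal sketch of what appropriate reliance could look like as a decision rule, assuming the AI exposes a calibrated confidence score and we have an estimate of the human's own accuracy on the task. The names, thresholds, and `Recommendation` type are hypothetical illustrations, not an established method:

```python
from dataclasses import dataclass


@dataclass
class Recommendation:
    label: str         # the AI's suggested answer
    confidence: float  # the AI's stated probability of being correct (0-1)


def appropriate_reliance(rec: Recommendation,
                         human_accuracy: float,
                         calibration_error: float = 0.0) -> str:
    """Follow the AI only when its calibration-adjusted confidence
    exceeds the human's expected accuracy on this kind of task;
    otherwise override."""
    adjusted_confidence = rec.confidence - calibration_error
    return "follow" if adjusted_confidence > human_accuracy else "override"


# Example: a well-calibrated 0.92-confidence recommendation beats a human
# who is right about 80% of the time, so the policy says "follow".
print(appropriate_reliance(Recommendation("approve", 0.92), human_accuracy=0.80))
```

The point of the sketch is the comparison itself: calibrated trust means the decision to follow or override tracks how likely the AI is to be right, not how much the user trusts AI in general.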

Related: 01-atom—algorithm-aversion-definition, 05-atom—overconfident-wrong-critical-case, 05-molecule—self-assessing-ai-pattern