The Trust Calibration Problem
Effective human-AI collaboration requires calibrated trust: trust that tracks the system's actual reliability, neither too much nor too little.
Over-trust leads to automation bias: humans defer to AI outputs even when those outputs are wrong, accept recommendations without critical evaluation, and miss errors they would otherwise catch working alone.
Under-trust leads to underutilization: humans ignore valid AI recommendations, duplicate work the AI has already done correctly, and fail to benefit from AI capabilities.
The challenge is that AI systems give users few signals to calibrate their trust appropriately. Outputs arrive with uniform confidence regardless of actual reliability. Users must develop their own mental models of when to trust and when to verify, but receive little help doing so.
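To make the gap between stated confidence and actual reliability concrete, here is a minimal sketch that bins logged (confidence, was_correct) pairs and compares average stated confidence with observed accuracy in each bin, the standard reliability-diagram view of calibration. The example data and the function name are assumptions for illustration, not anything from this note.

```python
from collections import defaultdict

def calibration_report(predictions, n_bins=5):
    """Group (confidence, was_correct) pairs into confidence bins and
    compare stated confidence with observed accuracy in each bin.

    `predictions` is a list of (confidence, was_correct) tuples, e.g.
    logged from past AI recommendations that a human later verified.
    """
    bins = defaultdict(list)
    for confidence, was_correct in predictions:
        # Clamp so confidence == 1.0 falls into the top bin.
        idx = min(int(confidence * n_bins), n_bins - 1)
        bins[idx].append((confidence, was_correct))

    report = []
    for idx in sorted(bins):
        items = bins[idx]
        mean_conf = sum(c for c, _ in items) / len(items)
        accuracy = sum(1 for _, ok in items if ok) / len(items)
        report.append({
            "bin": f"{idx / n_bins:.1f}-{(idx + 1) / n_bins:.1f}",
            "mean_confidence": round(mean_conf, 2),
            "observed_accuracy": round(accuracy, 2),
            "gap": round(mean_conf - accuracy, 2),  # > 0 means overconfident
            "n": len(items),
        })
    return report

# Hypothetical log: the AI sounded ~90% confident on everything,
# but was right only ~70% of the time in this user's domain.
log = [(0.9, True), (0.9, False), (0.9, True), (0.9, True), (0.9, False),
       (0.9, True), (0.9, True), (0.9, False), (0.9, True), (0.9, True)]
for row in calibration_report(log):
    print(row)
```

The point of the sketch is the "gap" column: uniform confidence paired with varying accuracy is exactly the signal users never see and therefore cannot calibrate against.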
System transparency is necessary but not sufficient. Users also need feedback loops that help them learn where the AI succeeds and fails in their specific context.
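One way such a feedback loop could look in practice is a small ledger of verification outcomes per task category, surfacing where the AI has held up and where it has not for this particular user. The `TrustLedger` class, the category names, and the `min_samples` threshold below are illustrative assumptions, not a design prescribed by this note.

```python
from collections import defaultdict

class TrustLedger:
    """Track how often AI suggestions held up under verification,
    broken down by task category, so a user can see where the AI
    tends to succeed or fail in their own context."""

    def __init__(self):
        self._outcomes = defaultdict(lambda: {"verified": 0, "correct": 0})

    def record(self, category, was_correct):
        """Record one verified AI suggestion for a category."""
        entry = self._outcomes[category]
        entry["verified"] += 1
        entry["correct"] += int(was_correct)

    def summary(self, min_samples=5):
        """Per-category observed accuracy, flagged when evidence is thin."""
        rows = {}
        for category, entry in sorted(self._outcomes.items()):
            accuracy = entry["correct"] / entry["verified"]
            rows[category] = {
                "observed_accuracy": round(accuracy, 2),
                "n": entry["verified"],
                "enough_data": entry["verified"] >= min_samples,
            }
        return rows

# Hypothetical usage: accuracy differs sharply by task type,
# which is the context-specific signal trust calibration needs.
ledger = TrustLedger()
for ok in [True, True, True, True, False]:
    ledger.record("boilerplate code", ok)
for ok in [False, True, False]:
    ledger.record("domain-specific edge cases", ok)
print(ledger.summary())
```

The design choice worth noting is the `enough_data` flag: a feedback loop that reports accuracy from two or three samples can miscalibrate trust as easily as no feedback at all.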
Related: 05-atom—uniform-confidence-problem, 04-atom—provenance-design, 05-atom—automation-bias-regulatory-recognition