Asymmetric Miscalibration Failures

Overconfident and underconfident AI systems fail in different ways, both reducing decision quality.

Overconfident AI (stated confidence > actual accuracy):

  • Users over-rely on incorrect AI advice
  • Higher “switch to AI” rates even when AI is wrong
  • Failure mode: adopting bad recommendations

Underconfident AI (stated confidence < actual accuracy):

  • Users under-rely on correct AI advice
  • Lower acceptance of accurate AI recommendations
  • Failure mode: ignoring good recommendations
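To make the two definitions concrete, here is a minimal Python sketch, assuming per-recommendation stated confidences and binary correctness labels are available (`signed_calibration_gap` is a hypothetical helper, not from any named library). The sign of the gap distinguishes the two failure modes:

```python
import numpy as np

def signed_calibration_gap(confidences, correct):
    """Mean stated confidence minus empirical accuracy.

    Positive  -> overconfident  (stated confidence > actual accuracy).
    Negative  -> underconfident (stated confidence < actual accuracy).
    """
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    return confidences.mean() - correct.mean()

# Toy example: a system stating ~90% confidence but right only half the time.
conf = [0.90, 0.85, 0.95, 0.90]
hits = [1, 0, 1, 0]  # 1 = recommendation was correct
gap = signed_calibration_gap(conf, hits)
print(f"gap = {gap:+.2f} ({'overconfident' if gap > 0 else 'underconfident'})")
# gap = +0.40 (overconfident)
```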

Both directions produce roughly equivalent drops in decision quality, but through opposite mechanisms. A system can be “equally wrong” in either direction, yet the interventions needed to correct the resulting human behavior are entirely different.
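One way to see why “equally wrong” hides the asymmetry: a standard unsigned calibration metric such as expected calibration error (ECE) scores both directions identically. A minimal sketch, using a simplified single-pass binned ECE (illustrative, not a production metric):

```python
import numpy as np

def ece(confidences, correct, n_bins=10):
    """Unsigned expected calibration error: bin-weighted average of
    |mean confidence - accuracy| per bin. Direction is discarded."""
    conf = np.asarray(confidences, dtype=float)
    corr = np.asarray(correct, dtype=float)
    bins = np.minimum((conf * n_bins).astype(int), n_bins - 1)
    total = 0.0
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            total += mask.mean() * abs(conf[mask].mean() - corr[mask].mean())
    return total

# Two systems that are 70% accurate: one states 90%, the other 50%.
hits = [1] * 7 + [0] * 3
print(ece([0.9] * 10, hits))  # ~0.20 (overconfident)
print(ece([0.5] * 10, hits))  # ~0.20 (underconfident)
# Identical unsigned error, opposite failure modes.
```

An unsigned score would rate both systems as equally miscalibrated, even though one needs users to trust it less and the other needs users to trust it more.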

This asymmetry complicates design: a single approach to “improving calibration display” won’t address both failure modes. Uniformly damping displayed confidence, for example, would reduce over-reliance on an overconfident system but deepen under-reliance on an underconfident one.

Related: 05-atom—calibration-detection-gap