The Transparency-Reliance Paradox

Telling users about AI miscalibration helps them detect it, but it doesn’t reliably translate into better reliance decisions.

When users are explicitly told their AI collaborator is overconfident or underconfident:

  • They correctly identify the miscalibration type
  • They trust the uncalibrated AI less
  • They under-rely on it, even when the AI is underconfident and should actually be relied on more

The paradox: transparency about overconfidence reduces over-reliance (good), but transparency about underconfidence increases under-reliance (bad). Awareness creates blanket skepticism rather than calibrated adjustment.
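
A minimal sketch of the underlying terms, with illustrative definitions and numbers rather than anything from the study: miscalibration can be labeled by comparing an AI’s stated confidence with its empirical accuracy.

```python
# Minimal sketch: label miscalibration by comparing mean stated confidence
# to empirical accuracy. The tolerance and the data are illustrative assumptions.

def miscalibration_label(stated_confidences, correct_flags, tolerance=0.05):
    """Return (label, gap) where gap = mean stated confidence - accuracy."""
    mean_confidence = sum(stated_confidences) / len(stated_confidences)
    accuracy = sum(correct_flags) / len(correct_flags)
    gap = round(mean_confidence - accuracy, 3)
    if gap > tolerance:
        return "overconfident", gap    # claims more than it delivers
    if gap < -tolerance:
        return "underconfident", gap   # delivers more than it claims
    return "well-calibrated", gap

# Hypothetical batch: the AI hedges around 60% confidence but is right 4 times out of 5.
print(miscalibration_label([0.60, 0.55, 0.65, 0.60, 0.60], [1, 1, 0, 1, 1]))
# -> ('underconfident', -0.2)
```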

Knowing something is wrong doesn’t mean you know how to respond appropriately. Users told about underconfidence reasonably concluded “I should trust this less,” when the correct response was to trust it more, because the AI was underselling its accuracy.
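
A toy expected-accuracy calculation makes the asymmetry concrete. The numbers are hypothetical, not from the study: the AI is right 90% of the time but reports only 65% confidence, while the user is right 70% of the time on their own.

```python
# Toy numbers (hypothetical, not from the study): an underconfident AI and
# a user deciding whether to defer to it.
AI_TRUE_ACCURACY = 0.90      # how often the AI is actually correct
AI_STATED_CONFIDENCE = 0.65  # what the AI reports (underconfident)
USER_ACCURACY = 0.70         # how often the user is correct alone
USER_CONFIDENCE = 0.75       # how confident the user feels

def expected_accuracy(defer_to_ai: bool) -> float:
    """Expected accuracy if the user always or never defers to the AI."""
    return AI_TRUE_ACCURACY if defer_to_ai else USER_ACCURACY

# "Trust it less": defer only when the AI sounds more confident than the user feels.
skeptical = AI_STATED_CONFIDENCE > USER_CONFIDENCE    # False -> never defer
# Correct response: defer when the AI's actual accuracy beats the user's own.
calibrated = AI_TRUE_ACCURACY > USER_ACCURACY         # True  -> defer

print(f"skeptical rule:  {expected_accuracy(skeptical):.0%}")   # 70%
print(f"calibrated rule: {expected_accuracy(calibrated):.0%}")  # 90%
```

Under these assumed numbers, the “trust it less” heuristic costs 20 points of accuracy; the right adjustment runs in the opposite direction.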

This suggests that transparent labeling alone is insufficient. Users need not just awareness of the problem but guidance on the appropriate behavioral response.

Related: 05-atom—calibration-detection-gap, 01-molecule—appropriate-reliance-framework