Trust vs. Reliance

Trust is an attitude. Reliance is a behavior. They’re related but not the same.

Trust refers to a user’s belief about an AI system’s reliability, competence, or intentions, measured through self-report scales like “I believe this AI provides accurate predictions.”

Reliance refers to whether users actually adopt AI recommendations, measured through behavioral metrics like “switch percentage” (how often users change their answer to match AI advice).
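A minimal sketch of how a switch-percentage metric might be computed; the trial structure and data here are illustrative assumptions, not a standard instrument:

```python
def switch_percentage(trials):
    """Fraction of disagreement trials where the user switched to match the AI.

    Each trial is (initial_answer, ai_advice, final_answer). Only trials
    where the user initially disagreed with the AI are counted, since
    agreement trials provide no evidence about reliance.
    """
    disagreements = [t for t in trials if t[0] != t[1]]
    if not disagreements:
        return 0.0
    switches = sum(1 for initial, ai, final in disagreements if final == ai)
    return switches / len(disagreements)

# Hypothetical session: 3 disagreement trials, user switched in 2 of them.
trials = [
    ("A", "B", "B"),  # disagreed, switched to AI advice
    ("A", "B", "A"),  # disagreed, kept own answer
    ("C", "C", "C"),  # agreed from the start -> excluded
    ("A", "D", "D"),  # disagreed, switched to AI advice
]
print(round(switch_percentage(trials), 2))  # -> 0.67
```

Note that this is purely behavioral: it says nothing about what the user believes, which is exactly why it can diverge from self-reported trust.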

The distinction matters because interventions affect them differently:

  • Miscalibrated confidence didn’t change stated trust (users couldn’t perceive the miscalibration) but dramatically changed reliance behavior
  • Transparency about miscalibration reduced stated trust but also created problematic under-reliance, for both overconfident and underconfident AI

You can have high trust with low reliance (believing AI is good but ignoring it) or low trust with high reliance (distrusting AI but following it anyway). Designing for trust alone won’t guarantee appropriate behavior.

Related: 01-molecule—appropriate-reliance-framework, 05-molecule—dynamic-trust-calibration