Trust as a Function of Three Factors

Trust in AI systems can be modeled as:

T = α₁E + α₂P + α₃(1 − U)

Where:

  • E = Explainability (can the system justify its outputs?)
  • P = Performance history (has the system been reliable?)
  • U = Uncertainty (how unsure is the system about its outputs? lower U means higher confidence)
  • α₁ + α₂ + α₃ = 1 (weights sum to 1; each factor is assumed to lie in [0, 1], so T ∈ [0, 1])
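
A minimal sketch of the model in Python; the function name, signature, and default weights are illustrative assumptions, as is the [0, 1] normalization of each factor:

```python
def trust(e: float, p: float, u: float,
          weights: tuple[float, float, float] = (0.4, 0.4, 0.2)) -> float:
    """T = α₁E + α₂P + α₃(1 − U), with E, P, U assumed to lie in [0, 1]."""
    a1, a2, a3 = weights
    if abs(a1 + a2 + a3 - 1.0) > 1e-9:  # the model requires the weights to sum to 1
        raise ValueError("weights must sum to 1")
    return a1 * e + a2 * p + a3 * (1.0 - u)

# Well-explained, historically reliable, fairly confident system:
print(f"{trust(e=0.8, p=0.9, u=0.2):.2f}")  # 0.4·0.8 + 0.4·0.9 + 0.2·0.8 = 0.84
```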

Higher explainability, a reliable performance history, and lower uncertainty each raise T. This suggests three distinct intervention points for trust calibration in human-AI interfaces (see the sensitivity sketch after the list):

  1. Improve explainability mechanisms
  2. Surface performance track records
  3. Make uncertainty visible
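
Because T is linear, each intervention's per-unit effect on trust is simply its weight: ∂T/∂E = α₁, ∂T/∂P = α₂, and ∂T/∂U = −α₃. A quick sensitivity check, reusing the sketch and the assumed weights above:

```python
baseline = trust(e=0.5, p=0.5, u=0.5)                  # 0.50
print(f"{trust(e=0.6, p=0.5, u=0.5) - baseline:.2f}")  # 0.04 = α₁ · 0.1
print(f"{trust(e=0.5, p=0.6, u=0.5) - baseline:.2f}")  # 0.04 = α₂ · 0.1
print(f"{trust(e=0.5, p=0.5, u=0.4) - baseline:.2f}")  # 0.02 = α₃ · 0.1
```

With these illustrative weights, improving explainability or surfacing performance history moves trust twice as much per unit as reducing uncertainty; that ranking is entirely an artifact of the chosen αᵢ, not of the model itself.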

Related: 05-molecule—dynamic-trust-calibration