Human-AI Decision Making

Core Contribution

Empirical research on how humans make decisions with AI assistance. Examines conditions for appropriate reliance and complementary performance.

Key Findings

  • Complementarity Challenge: Human-AI teams often underperform the best individual performer (a quick check is sketched below)
  • Reliance Calibration: Users struggle to judge when to trust the AI
  • Explanation Effects: Explanations can increase trust without increasing accuracy
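
A minimal sketch of the complementarity check, in Python. The per-case correctness arrays are made-up illustrations, not data from any study; the point is simply that complementary performance means the team's accuracy exceeds the best individual's.

    from statistics import mean

    def complementarity(human_correct, ai_correct, team_correct):
        """Return (team accuracy, best individual accuracy, gap).

        A positive gap means the human-AI team beat the best individual
        performer (complementary performance); a negative gap reflects the
        common finding that the team underperforms it.
        """
        human_acc = mean(human_correct)
        ai_acc = mean(ai_correct)
        team_acc = mean(team_correct)
        best_individual = max(human_acc, ai_acc)
        return team_acc, best_individual, team_acc - best_individual

    # Hypothetical per-case correctness (1 = correct decision)
    human = [1, 0, 1, 1, 0, 1, 0, 1]
    ai    = [1, 1, 1, 0, 1, 1, 1, 0]
    team  = [1, 1, 1, 1, 0, 1, 0, 1]
    print(complementarity(human, ai, team))  # (0.75, 0.75, 0.0)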

Design Implications

  • Uncertainty communication matters more than explanation detail (a calibration audit is sketched after this list)
  • Training on AI limitations improves calibration
  • Interface design shapes reliance patterns
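
One way to ground uncertainty communication is to audit whether the confidence shown to users tracks actual accuracy. Below is a minimal expected-calibration-error (ECE) sketch in Python; the bin count and the example confidence/correctness values are assumptions for illustration, not prescribed by this research.

    import numpy as np

    def expected_calibration_error(confidences, correct, n_bins=10):
        """Weighted average gap between stated confidence and observed accuracy."""
        confidences = np.asarray(confidences, dtype=float)
        correct = np.asarray(correct, dtype=float)
        edges = np.linspace(0.0, 1.0, n_bins + 1)
        ece = 0.0
        for lo, hi in zip(edges[:-1], edges[1:]):
            in_bin = (confidences > lo) & (confidences <= hi)
            if in_bin.any():
                gap = abs(correct[in_bin].mean() - confidences[in_bin].mean())
                ece += in_bin.mean() * gap  # weight by the share of cases in this bin
        return ece

    # Hypothetical AI outputs: stated confidence and whether the AI was right
    conf = [0.95, 0.90, 0.80, 0.70, 0.65, 0.60, 0.55, 0.50]
    hit  = [1,    1,    1,    0,    1,    0,    1,    0]
    print(expected_calibration_error(conf, hit))  # lower is better calibrated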

Appropriate Reliance

The goal: users trust the AI when it's right and override it when it's wrong (a simple reliance metric is sketched after the list below). Achieving this requires:

  • Accurate confidence communication
  • User understanding of AI limitations
  • Low-cost override mechanisms
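
A minimal sketch of that goal as a measurement, in Python. The function name and the example decision log are hypothetical; "appropriate reliance" here is simply the fraction of cases where the user accepted correct AI advice or overrode incorrect advice.

    def appropriate_reliance_rate(decisions):
        """decisions: list of (ai_was_correct, user_followed_ai) booleans."""
        appropriate = sum(
            1 for ai_correct, followed in decisions
            if (ai_correct and followed) or (not ai_correct and not followed)
        )
        return appropriate / len(decisions)

    # Hypothetical interaction log: (AI correct?, user followed AI?)
    log = [(True, True), (True, False), (False, False), (False, True), (True, True)]
    print(appropriate_reliance_rate(log))  # 0.6 appropriate reliance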

Related: 01-molecule—appropriate-reliance-framework, 05-molecule—dynamic-trust-calibration