Shneiderman 2020: Human-Centered AI
Full Title: Human-Centered Artificial Intelligence: Reliable, Safe & Trustworthy
Citation: Shneiderman, Ben. “Human-Centered Artificial Intelligence: Reliable, Safe & Trustworthy.” arXiv preprint arXiv:2002.04087 (February 23, 2020).
Source: https://arxiv.org/abs/2002.04087v1
Core Argument
The traditional one-dimensional “levels of automation” framework (Sheridan & Verplank, 1978) assumes that automation and human control trade off against each other: more of one means less of the other. Shneiderman argues this is a false constraint. By decoupling automation from control into two independent dimensions, designers can target high automation AND high human control simultaneously: the upper-right quadrant, where Reliable, Safe & Trustworthy (RST) systems live.
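To make the two-axis framing concrete, here is a minimal sketch that classifies a design point by its position on the two independent dimensions. The quadrant labels follow the paper’s framing; the numeric 0–1 scale, the threshold, and all names are illustrative assumptions, not from the source.

```python
from dataclasses import dataclass


@dataclass
class DesignPoint:
    """A system's position on Shneiderman's two independent axes.

    Both values use an assumed 0.0-1.0 scale (not from the paper).
    """
    automation: float     # level of computer automation
    human_control: float  # level of human control


def classify(p: DesignPoint, threshold: float = 0.5) -> str:
    """Map a design point to one of the four quadrants.

    The 0.5 threshold is an arbitrary illustrative cutoff.
    """
    hi_auto = p.automation >= threshold
    hi_ctrl = p.human_control >= threshold
    if hi_auto and hi_ctrl:
        return "RST target: high automation AND high human control"
    if hi_auto:
        return "danger zone: excessive automation"
    if hi_ctrl:
        return "danger zone: excessive human control"
    return "low automation, low human control"


# A 'seamless' AI feature that strips user control scores high on
# automation but low on human control -- the excessive-automation zone.
print(classify(DesignPoint(automation=0.9, human_control=0.2)))
```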
Key Contributions
- Two-dimensional HCAI framework replacing the one-dimensional levels-of-automation model
- RST framework (Reliable, Safe, Trustworthy) as evaluation criteria
- Four-quadrant analysis identifying appropriate design targets for different contexts
- Prometheus Principles for designing HCAI interfaces
- Identification of two danger zones: excessive automation and excessive human control
Extracted Content
- 07-atom—automation-control-false-tradeoff
- 05-atom—algorithmic-hubris
- 01-atom—imperceptible-ai-problem
- 05-atom—rst-framework
- 07-molecule—hcai-two-dimensional-framework
- 01-molecule—prometheus-principles
Why This Source Matters
This paper provides theoretical grounding for the “UI as Ultimate Guardrail” principle. It explains why interface design determines AI system trustworthiness: well-designed interfaces can maintain human control even as automation increases. The false-tradeoff framing is particularly useful for pushing back on AI product decisions that sacrifice user control in the name of “seamless” automation.