Shneiderman 2020: Human-Centered AI

Full Title: Human-Centered Artificial Intelligence: Reliable, Safe & Trustworthy

Citation: Shneiderman, Ben. “Human-Centered Artificial Intelligence: Reliable, Safe & Trustworthy.” arXiv preprint arXiv:2002.04087 (February 23, 2020).

Source: https://arxiv.org/abs/2002.04087v1

Core Argument

The traditional one-dimensional “levels of automation” framework (Sheridan & Verplanck, 1978) assumes a tradeoff between automation and human control: more of one means less of the other. Shneiderman argues this is a false constraint. By decoupling automation from control into two independent dimensions, designers can target high automation AND high human control simultaneously: the upper-right quadrant, where Reliable, Safe & Trustworthy (RST) systems live.
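The decoupling can be made concrete with a small sketch (the function name, threshold, and quadrant labels are illustrative, not from the paper): a design sits at a point on two independent axes, and only one of the four quadrants combines high automation with high human control.

```python
# Illustrative sketch of the two-dimensional HCAI framework: automation
# and human control are independent axes, not a single slider. The 0.5
# threshold and label wording are assumptions for this example.

def hcai_quadrant(automation: float, control: float) -> str:
    """Classify a design point on the two independent axes (each in [0, 1])."""
    high_auto = automation >= 0.5
    high_ctrl = control >= 0.5
    if high_auto and high_ctrl:
        return "RST target (high automation, high human control)"
    if high_auto:
        return "computer-autonomy quadrant (high automation, low control)"
    if high_ctrl:
        return "human-mastery quadrant (low automation, high control)"
    return "low-automation, low-control quadrant"
```

On a one-dimensional scale, `automation=0.9` would force `control=0.1`; here the two values vary freely, so `hcai_quadrant(0.9, 0.9)` is a reachable design target rather than a contradiction.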

Key Contributions

  1. Two-dimensional HCAI framework replacing the one-dimensional levels of automation
  2. RST framework (Reliable, Safe, Trustworthy) as evaluation criteria
  3. Four-quadrant analysis identifying appropriate design targets for different contexts
  4. Prometheus Principles for designing HCAI interfaces
  5. Danger zone identification for excessive automation and excessive human control

Extracted Content

Why This Source Matters

This paper provides theoretical grounding for the “UI as Ultimate Guardrail” principle. It explains why interface design determines AI system trustworthiness: well-designed interfaces can maintain human control even as automation increases. The false-tradeoff framing is particularly useful for pushing back on AI product decisions that sacrifice user control in the name of “seamless” automation.