HCAI Two-Dimensional Framework

Overview

The Human-Centered AI framework replaces the traditional one-dimensional “levels of automation” with a two-dimensional model. The X-axis represents level of automation (low to high). The Y-axis represents level of human control (low to high). This decoupling reveals design possibilities that one-dimensional thinking makes invisible.

The Four Quadrants

Upper Right: Reliable, Safe & Trustworthy (RST). High automation AND high human control. This is the target for most consequential AI systems. Examples: modern cameras (auto-exposure with creative control), elevators (automated movement with user destination control), surgical robots (automated precision with surgeon guidance).

Lower Right: Computer Control. High automation, low human control. Appropriate for rapid-action scenarios where there’s no time for human intervention. Examples: airbag deployment, anti-lock brakes, pacemakers, defensive weapons systems. These systems demand extremely careful design and extensive testing precisely because humans can’t intervene.

Upper Left: Human Mastery. High human control, low automation. Appropriate when the goal is skill-building, creative exploration, or intrinsic satisfaction. Examples: bicycle riding, piano playing, baking, playing with children. Automation here would diminish the experience.

Lower Left: Simple/Dangerous. Low automation, low human control. Home of simple devices (clocks, mousetraps) and dangerous ones (land mines). Limited design relevance for AI systems.
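To make the geometry concrete, here is a minimal sketch of the quadrant mapping in Python. The numeric scale, the 0.5 threshold, and all names are illustrative assumptions; the framework itself does not prescribe how to quantify either axis.

```python
from enum import Enum


class Quadrant(Enum):
    RELIABLE_SAFE_TRUSTWORTHY = "high automation, high human control"
    COMPUTER_CONTROL = "high automation, low human control"
    HUMAN_MASTERY = "high human control, low automation"
    SIMPLE_OR_DANGEROUS = "low automation, low human control"


def classify(automation: float, control: float, threshold: float = 0.5) -> Quadrant:
    """Place a system on the two axes.

    automation, control: levels normalized to [0, 1] (an assumption;
    the framework defines no numeric scale).
    threshold: arbitrary midpoint separating "low" from "high".
    """
    high_automation = automation >= threshold
    high_control = control >= threshold
    if high_automation and high_control:
        return Quadrant.RELIABLE_SAFE_TRUSTWORTHY
    if high_automation:
        return Quadrant.COMPUTER_CONTROL
    if high_control:
        return Quadrant.HUMAN_MASTERY
    return Quadrant.SIMPLE_OR_DANGEROUS


# A modern camera: exposure is automated, creative control stays high.
print(classify(automation=0.9, control=0.8).name)  # RELIABLE_SAFE_TRUSTWORTHY
```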

The Danger Zones

Excessive Automation (far right edge): Automation pushed beyond what the system can reliably handle. Boeing 737 MAX’s MCAS exemplifies this: designers believed the automated system couldn’t fail, so they neither documented it nor trained pilots to override it.

Excessive Human Control (far top edge): Humans given control without adequate guardrails. Many “human error” accidents are really design failures: the system allowed catastrophic mistakes that should have been constrained.
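Extending the sketch above, the two edges could be flagged programmatically. The limits below are hypothetical placeholders: in practice, “what the system can reliably handle” comes from domain analysis and testing, not a fixed number.

```python
def edge_warnings(automation: float, control: float,
                  reliability_limit: float = 0.9,
                  guardrail_limit: float = 0.9) -> list[str]:
    """Flag the framework's two danger zones (illustrative limits only).

    reliability_limit: hypothetical ceiling on automation the system has
    demonstrably handled; beyond it lies excessive automation.
    guardrail_limit: hypothetical ceiling on unconstrained human control;
    beyond it lies excessive human control.
    """
    warnings = []
    if automation > reliability_limit:
        warnings.append("excessive automation: beyond demonstrated reliability")
    if control > guardrail_limit:
        warnings.append("excessive human control: inadequate guardrails")
    return warnings


print(edge_warnings(automation=0.97, control=0.6))
# -> ['excessive automation: beyond demonstrated reliability']
```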

When to Use This Framework

Use it when designing AI systems to ask: where on the two axes should this system live? Most product discussions default to the one-dimensional tradeoff (“how much automation vs. control?”). This framework asks a better question: “How do we achieve high automation AND high human control?”

Limitations

The framework doesn’t specify how to achieve high-high designs; that requires the Prometheus Principles and domain-specific knowledge. It also doesn’t capture the temporal dimension: systems may need to move between quadrants as contexts change or as users develop expertise.

Related: 07-atom—automation-control-false-tradeoff, 01-molecule—prometheus-principles, 07-molecule—ui-as-ultimate-guardrail