The Excessive Automation Danger Zone

Pushing automation beyond what a system can reliably handle creates a specific failure mode: users can’t intervene because they don’t know they need to, don’t know how to, or don’t have time to.

The Boeing 737 MAX disasters exemplify this pattern. The Maneuvering Characteristics Augmentation System (MCAS) was designed to autonomously correct a flight characteristic that pilots weren’t supposed to encounter. Because designers believed the system couldn’t fail, they left it out of the flight manual and didn’t train pilots to recognize or override it. When the single angle-of-attack sensor feeding it failed, pilots faced an invisible system fighting their control inputs, with no understanding of what was happening or how to stop it.

The National Transportation Safety Board’s 2017 report on a fatal Tesla Autopilot crash made a similar point: automating something “because we can” doesn’t necessarily improve the combined human-automation system. The Autopilot name suggested more capability than the system actually had, encouraging drivers to become less vigilant.

The pattern: excessive automation creates invisible dependencies on systems users can’t monitor, understand, or override.
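
A minimal sketch of the contrast, in deliberately simplified Python with hypothetical names and thresholds (not how MCAS or Autopilot are actually implemented): one correction loop trusts a single sensor, acts silently, and offers no override; the other cross-checks sensors, reports its state, and stands down when the operator intervenes.

```python
from dataclasses import dataclass

@dataclass
class Sensor:
    value: float  # e.g. a hypothetical angle-of-attack reading, in degrees

# Anti-pattern: single sensor, no indication, no override path.
def invisible_correction(sensor: Sensor, pilot_input: float) -> float:
    """Silently overrides the operator whenever one sensor exceeds a limit."""
    if sensor.value > 15.0:          # trusts a single reading, even if faulty
        return pilot_input - 2.5     # correction applied with no indication
    return pilot_input

# Safer shape: cross-check sensors, surface state, yield to the operator.
def observable_correction(primary: Sensor, backup: Sensor,
                          pilot_input: float, override: bool) -> tuple[float, str]:
    """Acts only when sensors agree, always reports what it is doing,
    and stands down the moment the operator overrides."""
    if override:
        return pilot_input, "automation OFF (operator override)"
    if abs(primary.value - backup.value) > 5.0:
        return pilot_input, "automation OFF (sensor disagreement)"
    if primary.value > 15.0:
        return pilot_input - 2.5, "automation ACTIVE (nose-down correction)"
    return pilot_input, "automation standing by"

if __name__ == "__main__":
    faulty = Sensor(value=40.0)  # stuck-high reading from a failed sensor
    good = Sensor(value=5.0)
    # The invisible version fights the operator with no explanation:
    print(invisible_correction(faulty, pilot_input=0.0))
    # The observable version refuses to act on contradictory data and says why:
    print(observable_correction(faulty, good, pilot_input=0.0, override=False))
```

The difference is structural rather than algorithmic: the second version makes the dependency visible and interruptible, which is exactly what excessive automation removes.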

Related: 05-atom—algorithmic-hubris, 01-atom—imperceptible-ai-problem, 07-atom—automation-control-false-tradeoff