Imperceptible AI Is Not Ethical AI

“Imperceptible AI is not ethical AI.”

  • IBM AI Design Guidelines

When automation operates invisibly, users cannot calibrate their trust, cannot anticipate system behavior, and cannot intervene when needed. The Boeing 737 MAX's MCAS system exemplified this failure: its existence wasn't documented in the pilots' manuals, and crews weren't trained to override it, because its designers believed it couldn't fail.

Invisible automation removes the possibility of informed consent. Users can’t choose whether to rely on something they don’t know exists. They can’t develop appropriate mental models. They can’t recognize when the system is operating outside its competence.

This doesn’t mean every automation must be visible at all times, but users need to be able to discover what’s happening and why when it matters.
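As a minimal sketch of that principle, the snippet below uses a hypothetical `AutomationLog` (the names and structure are illustrative assumptions, not from the source): automated actions are recorded silently as they happen, but their rationale and override status remain discoverable whenever the user asks.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List


@dataclass
class AutomationEvent:
    """One automated action, recorded so users can audit it later."""
    action: str
    reason: str
    overridable: bool
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )


class AutomationLog:
    """Discoverable-on-demand record of what the system did and why."""

    def __init__(self) -> None:
        self._events: List[AutomationEvent] = []

    def record(self, action: str, reason: str,
               overridable: bool = True) -> None:
        # Recording is silent: the automation doesn't interrupt the user.
        self._events.append(AutomationEvent(action, reason, overridable))

    def explain(self) -> List[str]:
        # But every action, its rationale, and its override status can be
        # surfaced when it matters -- the user never has to rely on a
        # system they cannot inspect.
        return [
            f"{e.action}: {e.reason}"
            + ("" if e.overridable else " (NO MANUAL OVERRIDE)")
            for e in self._events
        ]


log = AutomationLog()
log.record("trimmed nose down",
           "angle-of-attack sensor exceeded threshold",
           overridable=False)
print(log.explain())
```

The design choice worth noting: visibility is pull, not push. The system stays quiet by default, but `explain()` guarantees the user can always reconstruct what happened and why.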

Related: 07-molecule—ui-as-ultimate-guardrail, 05-atom—excessive-automation-danger, 05-atom—uniform-confidence-problem