Automation Bias in Regulatory Text
The EU AI Act explicitly names automation bias, the tendency to over-rely on automated output, as a design problem that providers must address.
Article 14(4)(b) requires that high-risk AI systems be designed so that the people overseeing them can “remain aware of the possible tendency of automatically relying or over-relying on the output produced by a high-risk AI system.”
This is significant. A cognitive phenomenon documented in human factors research has been codified into law as something designers must actively counteract. The regulation doesn’t merely warn about automation bias; it mandates that systems be designed to mitigate it.
The implication: interface design is now a regulatory compliance domain, not just a usability concern.
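The Act does not prescribe any particular mitigation, so the following is only an illustrative sketch of what “designed to counteract automation bias” could look like in an interface: surface the system’s uncertainty instead of presenting every output with uniform certainty, and add friction before low-confidence recommendations can be accepted. The function name `presentForReview`, the 0.7 threshold, and the field names are hypothetical assumptions, not drawn from the regulation.

```typescript
// Illustrative sketch only: the AI Act does not prescribe a UI pattern.
// All names and thresholds here are hypothetical design choices.

interface AiRecommendation {
  label: string;      // the system's suggested decision
  confidence: number; // calibrated probability in [0, 1]
}

interface ReviewPresentation {
  showRecommendation: boolean;      // reveal the AI output immediately?
  requireIndependentInput: boolean; // force the overseer to record their own judgment first
  banner: string;                   // awareness message shown to the overseer
}

// One possible mitigation: withhold low-confidence recommendations until the
// overseer has entered an independent judgment, and always display the
// system's confidence rather than a uniform, authoritative-looking output.
function presentForReview(rec: AiRecommendation): ReviewPresentation {
  const LOW_CONFIDENCE = 0.7; // hypothetical threshold, not from the regulation
  const pct = (rec.confidence * 100).toFixed(0);

  if (rec.confidence < LOW_CONFIDENCE) {
    return {
      showRecommendation: false,
      requireIndependentInput: true,
      banner: `System confidence is ${pct}%. Record your own assessment before viewing the recommendation.`,
    };
  }
  return {
    showRecommendation: true,
    requireIndependentInput: false,
    banner: `System confidence: ${pct}%. You remain responsible for the final decision.`,
  };
}

// Example: a borderline case takes the friction path.
console.log(presentForReview({ label: "approve", confidence: 0.64 }));
```

Whatever the specific pattern, the point of designs like this is that the overseer’s awareness of the system’s fallibility becomes part of the interaction itself rather than something left to training or policy documents.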
Related: 01-molecule—human-oversight-as-design-requirement, 05-atom—uniform-confidence-problem