The Oversight Scalability Problem
What happens when AI systems operate at speeds and scales that exceed human oversight capacity?
The EU AI Act mandates effective human oversight for high-risk systems (Article 14). But it doesn’t resolve the fundamental tension: meaningful oversight requires cognitive engagement, and cognitive engagement doesn’t scale with algorithmic processing speed.
An AI system making thousands of consequential decisions per hour cannot be individually overseen by humans. Sampling-based oversight introduces gaps. Purely post-hoc review means harm has already occurred.
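To make the scale mismatch concrete, here is a back-of-the-envelope calculation. All three numbers (throughput, reviewer capacity, harm rate) are illustrative assumptions, not empirical figures:

```python
# Illustrative arithmetic (hypothetical numbers): how much harm escapes
# a sampling-based oversight regime before anyone looks at it.

decisions_per_hour = 5_000    # assumed system throughput
human_reviews_per_hour = 25   # assumed sustained reviewer capacity
harm_rate = 0.002             # assumed fraction of decisions causing harm

sample_fraction = human_reviews_per_hour / decisions_per_hour
harms_per_hour = decisions_per_hour * harm_rate
unreviewed_harms_per_hour = harms_per_hour * (1 - sample_fraction)

print(f"Sample fraction: {sample_fraction:.2%}")                 # 0.50%
print(f"Harmful decisions/hour: {harms_per_hour:.1f}")           # 10.0
print(f"Escaping review/hour: {unreviewed_harms_per_hour:.2f}")  # 9.95
```

Under these assumptions, random sampling reviews one decision in two hundred, so essentially all harmful decisions pass unexamined.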
Current approaches:
- Statistical monitoring for anomalies (but individual harmful decisions may be too fine-grained to register as anomalies)
- Thresholds that trigger human review (but threshold-setting is itself a judgment call; see the sketch after this list)
- Periodic audits (but audits catch patterns, not individual decisions)
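A minimal sketch of the threshold approach. The risk score, the 0.7 threshold, and the queue capacity are all hypothetical; the point is that each one encodes a human judgment made in advance, outside any individual decision:

```python
from dataclasses import dataclass
from queue import Queue

REVIEW_THRESHOLD = 0.7                    # judgment call: what counts as "risky enough"
review_queue: Queue = Queue(maxsize=100)  # judgment call: assumed reviewer capacity

@dataclass
class Decision:
    id: str
    risk_score: float        # assumed output of some upstream risk model
    auto_approved: bool = False

def route(decision: Decision) -> Decision:
    """Auto-approve below the threshold; escalate to a human above it."""
    if decision.risk_score >= REVIEW_THRESHOLD:
        if review_queue.full():
            # The uncomfortable branch: the queue is a hard cap on oversight,
            # so something must give: block, drop, or silently approve.
            raise RuntimeError(f"review capacity exhausted at {decision.id}")
        review_queue.put(decision)
    else:
        decision.auto_approved = True
    return decision
```

The full-queue branch is where the scalability problem reappears: once reviewer capacity is exhausted, the system must block, drop, or auto-approve, and each option undermines the oversight mandate in a different way.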
The question persists: Is the human oversight mandate compatible with systems that operate beyond human cognitive bandwidth? Or does compliance require design constraints that limit system throughput to human-reviewable rates?
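The throughput-limiting reading can also be made concrete. A sketch, not a proposal: a token-bucket limiter that throttles decision throughput to an assumed human review rate (the hypothetical 25/hour figure from the arithmetic above):

```python
import time

class ReviewRateLimiter:
    """Caps decision throughput at an assumed human review rate."""

    def __init__(self, reviews_per_hour: float):
        self.rate = reviews_per_hour / 3600.0  # tokens replenished per second
        self.tokens = 0.0
        self.last = time.monotonic()

    def acquire(self) -> None:
        """Block until a human-reviewable slot is available."""
        while True:
            now = time.monotonic()
            self.tokens = min(1.0, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1.0:
                self.tokens -= 1.0
                return
            time.sleep((1.0 - self.tokens) / self.rate)

# Usage: limiter = ReviewRateLimiter(25); call limiter.acquire() before
# each consequential decision. The system now runs at human speed,
# which is exactly the throughput cost the question above is weighing.
```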
Related: 01-molecule—human-oversight-as-design-requirement, 05-atom—automation-bias-regulatory-recognition, 05-atom—serious-incident-definition