Quantified Gains from Graduated Human-AI Collaboration
In the CyberAlly deployment (an LLM-based assistant operating at Level 2 autonomy with human-on-the-loop oversight), measured improvements included:
- 50% reduction in false positives reaching human analysts
- 67% reduction in investigation time (from 3 hours to 1 hour)
- 80% reduction in mean time to respond
- Automated ticketing increased from 10% to 75% of incidents
The key design choice: CyberAlly did not start at this level of autonomy. Analysts initially verified everything; trust was built through demonstrated accuracy, and autonomy was earned over time.
The pattern suggests that deployment strategy matters as much as technical capability: graduated autonomy, starting conservatively and expanding based on evidence, produces both better outcomes and better human acceptance.
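One way to make such a policy concrete is a gate that promotes or demotes the assistant's autonomy level based on a rolling window of analyst-verified decisions. The sketch below is illustrative only: the class name, level labels, window size, and thresholds are assumptions, not CyberAlly's actual mechanism.

```python
from collections import deque

# Hypothetical graduated-autonomy gate: autonomy is promoted only after the
# assistant's recent, analyst-verified decisions clear an accuracy bar over a
# minimum sample size, and demoted if accuracy slips. All names and thresholds
# here are illustrative assumptions, not CyberAlly's implementation.

AUTONOMY_LEVELS = ["analyst_verifies_all", "human_on_the_loop", "auto_ticketing"]

class AutonomyGate:
    def __init__(self, window_size=200, promote_at=0.95, demote_at=0.85):
        self.level = 0                            # start conservative
        self.window = deque(maxlen=window_size)   # rolling record of verified calls
        self.promote_at = promote_at
        self.demote_at = demote_at

    def record(self, ai_was_correct: bool) -> str:
        """Log one analyst-verified decision and adjust the autonomy level."""
        self.window.append(ai_was_correct)
        if len(self.window) == self.window.maxlen:        # enough evidence to judge
            accuracy = sum(self.window) / len(self.window)
            if accuracy >= self.promote_at and self.level < len(AUTONOMY_LEVELS) - 1:
                self.level += 1
                self.window.clear()                       # re-earn trust at the new level
            elif accuracy < self.demote_at and self.level > 0:
                self.level -= 1
                self.window.clear()
        return AUTONOMY_LEVELS[self.level]
```

Clearing the window after each transition forces the assistant to re-earn trust at every level, mirroring the principle above that autonomy is earned through demonstrated accuracy rather than granted up front.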
Related: 05-molecule—dynamic-trust-calibration, 05-atom—five-autonomy-levels, 05-atom—alert-fatigue-statistics