# Risk-Based AI Classification Framework

## The Framework
The EU AI Act establishes a four-tier classification system that determines what obligations apply to an AI system:
| Risk Level | Regulatory Response | Examples |
|---|---|---|
| Unacceptable | Prohibited | Social scoring, subliminal manipulation, predictive policing |
| High | Extensive requirements | Medical devices, employment screening, credit scoring |
| Limited | Transparency obligations | Chatbots, deepfakes, emotion recognition |
| Minimal | No obligations | Spam filters, video game AI |
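The tier-to-obligation mapping in the table can be expressed as a simple lookup. This is an illustrative sketch only; the enum and dictionary names are my own, not terms from the Act:

```python
from enum import Enum

class RiskLevel(Enum):
    """The four tiers of the EU AI Act's risk classification."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Regulatory response per tier, mirroring the table above.
OBLIGATIONS = {
    RiskLevel.UNACCEPTABLE: "prohibited",
    RiskLevel.HIGH: "extensive requirements",
    RiskLevel.LIMITED: "transparency obligations",
    RiskLevel.MINIMAL: "no specific obligations",
}

print(OBLIGATIONS[RiskLevel.LIMITED])
```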
## Why It Matters
This framework operationalizes a key principle: regulation should be proportional to risk. Not all AI requires the same governance. The framework allows innovation in low-risk applications while concentrating oversight resources on systems that can cause significant harm.
The classification also determines who is regulated and how: providers of high-risk systems face pre-market conformity assessment, while providers of minimal-risk systems face no specific obligations. The regulatory burden scales with potential impact.
## How It Works
Unacceptable Risk (Prohibited): Eight categories of practice where no deployment is acceptable. Classification turns on the nature of the use, not the underlying technology.
High Risk: Two pathways:
- AI embedded in products already covered by EU product safety legislation (medical devices, vehicles, toys)
- AI used in specific high-stakes domains listed in Annex III (biometrics, critical infrastructure, employment, education, law enforcement, border control, justice administration)
Limited Risk: Systems that interact with humans or generate synthetic content. Providers must disclose AI involvement to the people affected.
Minimal Risk: Everything else. No specific obligations under the Act.
## Limitations
The framework assumes risks can be pre-identified by use case. It may struggle with:
- General-purpose systems applied in unforeseen ways
- Emergent risks from capability advances
- Context-dependent harm that doesn’t map cleanly to categories
The GPAI provisions (Chapter 5) partially address the first concern, but tensions remain between static classification and dynamic capability.
Related: 05-atom—provider-deployer-distinction, 01-atom—fundamental-rights-impact-assessment