Prohibited AI Practices (EU Classification)
The EU AI Act identifies eight categories of AI use that are outright prohibited, the regulatory equivalent of “there is no acceptable version of this”:
- Subliminal manipulation: AI deploying techniques beyond conscious awareness to distort behavior
- Exploitation of vulnerabilities: Targeting age, disability, or economic circumstances to distort behavior
- Social scoring: Evaluating or classifying individuals based on social behavior or personal characteristics, leading to unjustified or disproportionate detrimental treatment
- Predictive policing of individuals: Assessing the risk that a person will commit a criminal offense based solely on profiling or personality traits
- Facial recognition scraping: Untargeted scraping of facial images from the internet or CCTV footage to build or expand facial recognition databases
- Emotion recognition in workplace/education: Inferring emotions in these contexts (with narrow exceptions)
- Biometric categorization for sensitive attributes: Deducing race, politics, religion, sexuality from biometrics
- Real-time remote biometric identification: In public spaces for law enforcement (with narrow exceptions)
What unifies these categories: they either eliminate meaningful consent, target human vulnerabilities, or enable surveillance at scale. The prohibition framework reveals the EU’s bright lines for human dignity in AI contexts.
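For teams running an intake or screening step, the eight categories can be modeled as a flat checklist where any single "yes" rules the use case out entirely, with no mitigation tier (unlike the risk-based classification in the related molecule note). Below is a minimal sketch assuming a hypothetical questionnaire-style screening step; the `ProhibitedPractice` enum names and the `screen` helper are illustrative shorthand, not official Act terminology.

```python
from dataclasses import dataclass
from enum import Enum, auto


class ProhibitedPractice(Enum):
    """Illustrative labels for the eight prohibited categories (not official Act wording)."""
    SUBLIMINAL_MANIPULATION = auto()
    EXPLOITATION_OF_VULNERABILITIES = auto()
    SOCIAL_SCORING = auto()
    INDIVIDUAL_PREDICTIVE_POLICING = auto()
    FACIAL_RECOGNITION_SCRAPING = auto()
    EMOTION_RECOGNITION_WORK_EDU = auto()
    BIOMETRIC_CATEGORISATION_SENSITIVE = auto()
    REALTIME_REMOTE_BIOMETRIC_ID = auto()


@dataclass
class ScreeningResult:
    practice: ProhibitedPractice
    triggered: bool
    note: str  # e.g. a pointer for escalation; empty if not triggered


def screen(answers: dict[ProhibitedPractice, bool]) -> list[ScreeningResult]:
    """Flag every prohibited practice the intake questionnaire answered 'yes' to.

    Any triggered item means the use case is out of scope entirely; there is
    no risk-tier mitigation path for prohibited practices.
    """
    return [
        ScreeningResult(
            practice=p,
            triggered=answers.get(p, False),
            note="escalate to legal review" if answers.get(p, False) else "",
        )
        for p in ProhibitedPractice
    ]


# Usage: a use case that scrapes CCTV stills to build a face database is flagged.
results = screen({ProhibitedPractice.FACIAL_RECOGNITION_SCRAPING: True})
print([r.practice.name for r in results if r.triggered])  # ['FACIAL_RECOGNITION_SCRAPING']
```

The design choice worth noting: prohibited practices are a boolean gate, not a score, so the sketch deliberately avoids any weighting or aggregation that might suggest a "mostly compliant" outcome exists.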
Related: 05-molecule—risk-based-ai-classification, 01-atom—fundamental-rights-impact-assessment