Prohibited AI Practices (EU Classification)

The EU AI Act (Article 5) identifies eight categories of AI practice that are outright prohibited, the regulatory equivalent of “there is no acceptable version of this”:

  1. Subliminal manipulation: AI deploying techniques beyond conscious awareness to distort behavior
  2. Exploitation of vulnerabilities: Exploiting age, disability, or a specific social or economic situation to materially distort behavior
  3. Social scoring: Evaluating individuals based on social behavior leading to unjustified detrimental treatment
  4. Predictive policing of individuals: Risk assessment based solely on profiling or personality traits
  5. Facial recognition scraping: Untargeted scraping of facial images from the internet or CCTV footage to build or expand facial recognition databases
  6. Emotion recognition in workplace/education: Inferring emotions in these settings, except for medical or safety reasons
  7. Biometric categorization for sensitive attributes: Deducing race, politics, religion, sexuality from biometrics
  8. Real-time remote biometric identification: In public spaces for law enforcement (with narrow exceptions)

What unifies these categories: they either eliminate meaningful consent, target human vulnerabilities, or enable surveillance at scale. The prohibition framework reveals the EU’s bright lines for human dignity in AI contexts.
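
To make the binary nature of this tier concrete, here is a minimal Python sketch that encodes the eight categories as an enum and screens a use case against them. The names `ProhibitedPractice` and `screen_use_case`, and the one-line descriptions, are illustrative shorthand for this note, not anything defined by the Act or by an existing compliance library.

```python
from enum import Enum


class ProhibitedPractice(Enum):
    """Illustrative encoding of the eight Article 5 prohibition categories."""
    SUBLIMINAL_MANIPULATION = "subliminal techniques that materially distort behavior"
    EXPLOITATION_OF_VULNERABILITIES = "exploiting age, disability, or social/economic situation"
    SOCIAL_SCORING = "social scoring leading to unjustified detrimental treatment"
    INDIVIDUAL_PREDICTIVE_POLICING = "crime-risk prediction based solely on profiling"
    FACIAL_RECOGNITION_SCRAPING = "untargeted scraping of facial images for recognition databases"
    EMOTION_RECOGNITION_WORK_EDUCATION = "emotion inference in workplace or education"
    BIOMETRIC_CATEGORIZATION_SENSITIVE = "inferring sensitive attributes from biometric data"
    REALTIME_REMOTE_BIOMETRIC_ID = "real-time remote biometric ID in public spaces for law enforcement"


def screen_use_case(flagged: set[ProhibitedPractice]) -> str:
    """Return a blunt verdict: a single flagged category makes the use prohibited.

    Unlike the high-risk tier, there is no mitigation path here; membership in
    any category ends the analysis (narrow statutory exceptions aside).
    """
    if flagged:
        hits = ", ".join(sorted(p.name for p in flagged))
        return f"PROHIBITED under Article 5: {hits}"
    return "Not prohibited; continue with risk-based classification."


# Example: an emotion-recognition tool pitched for monitoring employees
print(screen_use_case({ProhibitedPractice.EMOTION_RECOGNITION_WORK_EDUCATION}))
```

The design point of the sketch is that prohibition is a set-membership test, not a score: one hit ends the analysis, which is what separates this tier from the graduated obligations covered in the risk-based classification note.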

Related: 05-molecule—risk-based-ai-classification, 01-atom—fundamental-rights-impact-assessment