Fundamental Rights Impact Assessment (FRIA)
Article 27 of the EU AI Act requires certain deployers of high-risk AI systems (see the applicability criteria below) to conduct a fundamental rights impact assessment before putting such a system into use.
The assessment must include the following elements (sketched as a record type after this list):
- Description of the deployer’s processes involving the AI system
- Period and frequency of intended use
- Categories of persons and groups likely to be affected
- Specific risks of harm to affected categories
- Human oversight measures
- Measures to be taken if risks materialize
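
To make the checklist concrete, here is a minimal Python sketch of how a deployer might record these elements internally. All class and field names (`FriaRecord`, `missing_elements`, and so on) are illustrative assumptions; Article 27 prescribes the content of the assessment, not any particular format.

```python
from dataclasses import dataclass


@dataclass
class FriaRecord:
    """Illustrative container for the Article 27 assessment elements.

    Field names are our own; the Act prescribes content, not format.
    """
    deployer_processes: str         # processes in which the AI system is used
    usage_period: str               # intended period of use
    usage_frequency: str            # intended frequency of use
    affected_groups: list[str]      # categories of persons/groups likely affected
    risks_of_harm: dict[str, str]   # affected group mapped to its specific risk of harm
    oversight_measures: list[str]   # human oversight measures
    response_measures: list[str]    # measures to take if risks materialise

    def missing_elements(self) -> list[str]:
        """List the required elements that are still empty."""
        return [name for name, value in vars(self).items() if not value]
```

A deployer could keep one such record per high-risk system and use `missing_elements()` as a simple completeness check before deployment.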
This creates a pre-deployment obligation distinct from the provider-side conformity assessment: deployers must analyze their specific context, identifying who will be affected, what harms might occur, and how they will respond.
The FRIA requirement applies specifically to the following deployers (a rough applicability check follows the list):
- Bodies governed by public law
- Private entities providing public services
- Deployers using high-risk systems for creditworthiness assessment or credit scoring, or for risk assessment and pricing in life and health insurance (Annex III, points 5(b) and 5(c))
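
As a rough illustration of the scoping logic, the sketch below encodes these applicability criteria as a simple gate. The enum and function names are hypothetical, and the check is a simplification of Article 27(1), not legal advice.

```python
from enum import Enum, auto


class DeployerCategory(Enum):
    """Illustrative deployer categories drawn from the list above."""
    PUBLIC_LAW_BODY = auto()         # body governed by public law
    PRIVATE_PUBLIC_SERVICE = auto()  # private entity providing public services
    CREDIT_SCORING = auto()          # creditworthiness assessment / credit scoring
    LIFE_HEALTH_INSURANCE = auto()   # risk assessment and pricing in life/health insurance
    OTHER = auto()


# Simplified reading of the applicability criteria above; not legal advice.
FRIA_TRIGGERS = {
    DeployerCategory.PUBLIC_LAW_BODY,
    DeployerCategory.PRIVATE_PUBLIC_SERVICE,
    DeployerCategory.CREDIT_SCORING,
    DeployerCategory.LIFE_HEALTH_INSURANCE,
}


def fria_required(is_high_risk: bool, category: DeployerCategory) -> bool:
    """Pre-deployment gate: does this deployment need a FRIA first?"""
    return is_high_risk and category in FRIA_TRIGGERS
```

For example, `fria_required(True, DeployerCategory.CREDIT_SCORING)` evaluates to `True`, while a non-high-risk system returns `False` regardless of deployer category.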
This extends impact assessment from a provider-only responsibility to a deployer responsibility as well, acknowledging that context-specific harms require context-specific analysis.
Related: 05-atom—provider-deployer-distinction, 07-molecule—value-chain-accountability-ai, 05-molecule—risk-based-ai-classification