What the EU AI Act Requires of Designers
Practical Implications for AI System Design
The EU AI Act creates legal requirements for AI systems placed on the market or used in the EU. For designers and developers, this means concrete changes to how AI systems are built, documented, and deployed.
This isn’t a comprehensive legal analysis - consult lawyers for that. It’s a practical overview of what the requirements mean for design practice.
Risk-Based Classification
The Act classifies AI systems by risk level:
Unacceptable risk: Banned. Social scoring, real-time remote biometric identification in publicly accessible spaces (with narrow law-enforcement exceptions), manipulation of vulnerable groups.
High risk: Heavily regulated. HR/recruiting systems, credit scoring, law enforcement applications, critical infrastructure, education/vocational training assessment.
Limited risk: Transparency requirements. Chatbots must disclose they’re AI. Deepfakes must be labeled.
Minimal risk: No specific requirements. Most AI systems fall here.
The design question: What risk category does your system fall into? High-risk classification triggers extensive requirements.
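One way to keep the classification question visible in engineering work is to model it directly. The sketch below is illustrative only: the tier triggers are simplified and hypothetical (the authoritative list of high-risk uses is Annex III of the Act), and actual classification requires legal review.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"                  # heavily regulated uses
    LIMITED = "limited"            # transparency obligations only
    MINIMAL = "minimal"            # no specific obligations

# Hypothetical trigger set; the authoritative list is Annex III of the Act.
HIGH_RISK_DOMAINS = {
    "recruiting", "credit_scoring", "law_enforcement",
    "critical_infrastructure", "education_assessment",
}

def classify(domain: str, prohibited_practice: bool = False,
             user_facing_ai: bool = False) -> RiskTier:
    # Toy classifier - real classification needs legal review.
    if prohibited_practice:
        return RiskTier.UNACCEPTABLE
    if domain in HIGH_RISK_DOMAINS:
        return RiskTier.HIGH
    if user_facing_ai:  # e.g. a chatbot that must disclose it is AI
        return RiskTier.LIMITED
    return RiskTier.MINIMAL
```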
High-Risk System Requirements
If your system is high-risk, design must address:
Risk management system. Documented process for identifying, analyzing, and mitigating risks throughout the lifecycle.
Data governance. Training data must be relevant, representative, and, to the extent possible, free of errors. Data and its handling must be documented.
Technical documentation. Detailed documentation of design, development, and capabilities - sufficient for assessment.
Record keeping. Automatic logging of system operation for traceability (see the logging sketch after this list).
Transparency. Users must receive information about system capabilities and limitations.
Human oversight. Design must enable effective human oversight of operation.
Accuracy and robustness. Appropriate levels of accuracy, robustness, and cybersecurity throughout the lifecycle, including resilience against attempts to alter the system's behavior (for example, adversarial inputs).
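To make the record-keeping requirement concrete, here is a minimal sketch of append-only decision logging. The field names (event_id, model_version, and so on) are assumptions about what a traceability audit might need, not fields mandated by the Act.

```python
import json
import time
import uuid

def log_decision(log_path: str, inputs: dict, output, model_version: str) -> None:
    # Append one structured record per automated decision so that
    # operation can be reconstructed after the fact (traceability).
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_version": model_version,
        "inputs": inputs,     # must be JSON-serializable
        "output": output,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```

In practice the records would go to an append-only store with retention controls rather than a local file.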
Design Implications
Documentation becomes a deliverable. Design decisions must be documented not just for internal use but for regulatory compliance. This affects how design work is structured.
Provenance matters. Training data sources, preprocessing steps, and model lineage need to be traceable. Design for provenance from the start.
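A minimal way to design for provenance is to attach a lineage record to every dataset and training run. The fields below are assumptions about what an audit might ask for, not a schema from the Act:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class DatasetProvenance:
    source_uri: str                        # where the data came from
    collected_on: str                      # e.g. "2024-11"
    preprocessing_steps: List[str] = field(default_factory=list)
    known_limitations: List[str] = field(default_factory=list)

@dataclass
class ModelLineage:
    model_id: str
    base_model: Optional[str]              # None if trained from scratch
    datasets: List[DatasetProvenance] = field(default_factory=list)
    training_config_hash: str = ""         # hash of the exact config used
```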
Human oversight is architectural. “Human in the loop” isn’t a checkbox - systems must be designed to enable meaningful human oversight.
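A common architectural pattern (one possible reading of meaningful oversight, not a mechanism the Act prescribes) is a routing gate that sends low-confidence or high-impact cases to a human reviewer:

```python
def route_decision(confidence: float, impact: str,
                   auto_threshold: float = 0.95) -> dict:
    # Thresholds and impact labels are illustrative; set them per use case.
    if impact == "high" or confidence < auto_threshold:
        return {"action": "human_review", "confidence": confidence}
    return {"action": "auto_proceed", "confidence": confidence}
```

Routing alone isn't enough: reviewers also need the inputs and context to evaluate each case, plus the authority to override the system.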
Testing requirements expand. Beyond functional testing, high-risk systems need testing for bias, robustness, and accuracy across relevant conditions.
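As one example, a basic bias check compares accuracy across subgroups; large gaps are a signal to investigate. This is a sketch only - appropriate metrics depend on the domain (false-positive-rate parity may matter more than accuracy in a screening system):

```python
from collections import defaultdict

def subgroup_accuracy(preds, labels, groups) -> dict:
    # Accuracy per subgroup; compare across groups to surface gaps.
    hits = defaultdict(int)
    totals = defaultdict(int)
    for pred, label, group in zip(preds, labels, groups):
        totals[group] += 1
        hits[group] += int(pred == label)
    return {g: hits[g] / totals[g] for g in totals}
```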
Post-deployment monitoring. Design must include monitoring capabilities. Compliance continues after deployment.
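Monitoring can start simple: compare live inputs against the distribution the system was validated on. The check below is deliberately crude; production monitoring would use proper statistical tests (e.g., Kolmogorov-Smirnov) and cover outputs as well as inputs:

```python
def input_drift(reference_mean: float, live_values: list,
                tolerance: float = 0.1) -> bool:
    # Flag when recent inputs drift away from the validation baseline.
    if not live_values:
        return False
    live_mean = sum(live_values) / len(live_values)
    return abs(live_mean - reference_mean) > tolerance
```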
For Non-High-Risk Systems
Even systems not classified as high-risk benefit from:
Transparency by default. When AI is involved, users should know.
Documentation practices. Good documentation supports compliance if classification changes and enables better internal decisions regardless.
Risk awareness. Understanding the framework helps identify when systems might cross into high-risk territory.
Timeline and Applicability
The Act entered into force in August 2024, with requirements phasing in:
- Bans on unacceptable-risk practices: 6 months (February 2025)
- Obligations for general-purpose AI models: 12 months (August 2025)
- Most high-risk obligations (Annex III use cases): 24 months (August 2026)
- High-risk AI embedded in regulated products (Annex I): 36 months (August 2027)
For new systems being designed now: design for compliance from the start. Retrofitting is expensive.
The Bigger Picture
The EU AI Act represents a shift toward AI regulation globally. Other jurisdictions are watching and developing their own frameworks.
Design practices that meet EU requirements position organizations well for emerging regulation elsewhere. Compliance isn’t just about Europe - it’s about building sustainable AI practices.
Does your AI system classification match what the EU AI Act would assign? What would need to change if high-risk requirements applied?