# Value Chain Accountability for AI
## The Framework
The EU AI Act distributes responsibility across the entire AI value chain rather than concentrating it on a single entity. Different actors bear different obligations based on their role:
| Actor | Primary Obligations |
|---|---|
| Provider | Design, development, conformity assessment, documentation, post-market monitoring |
| Deployer | Appropriate use, human oversight, context-specific risk assessment, incident reporting |
| Importer | Verification of provider compliance before EU market entry |
| Distributor | Due diligence on compliance, proper storage/transport |
| Authorized Rep | Interface with authorities on behalf of non-EU providers |
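To make the division concrete, here is a minimal sketch of how a governance tool might encode these roles in Python; the `Actor` enum and `OBLIGATIONS` mapping are illustrative names, not terminology from the Act itself.

```python
from enum import Enum, auto

class Actor(Enum):
    """Roles the EU AI Act distinguishes along the value chain."""
    PROVIDER = auto()
    DEPLOYER = auto()
    IMPORTER = auto()
    DISTRIBUTOR = auto()
    AUTHORIZED_REP = auto()

# Primary obligations per actor, mirroring the table above.
# Paraphrased for illustration; not an exhaustive legal statement.
OBLIGATIONS: dict[Actor, tuple[str, ...]] = {
    Actor.PROVIDER: ("design/development controls", "conformity assessment",
                     "technical documentation", "post-market monitoring"),
    Actor.DEPLOYER: ("use per instructions", "human oversight",
                     "context-specific risk assessment", "incident reporting"),
    Actor.IMPORTER: ("verify provider compliance before EU market entry",),
    Actor.DISTRIBUTOR: ("due diligence on compliance", "proper storage/transport"),
    Actor.AUTHORIZED_REP: ("interface with authorities for non-EU providers",),
}
```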
## Why It Matters
AI systems don’t exist in isolation. A model developed in one context may be deployed in another by different organizations. Concentrating all responsibility on providers would:
- Ignore deployment context (which providers can’t fully anticipate)
- Let deployers off the hook for misuse
- Create gaps where systems cross organizational boundaries
The value chain model acknowledges that harm prevention requires action at multiple stages, not just at the point of creation.
## Key Mechanisms
**Substantial Modification Trigger:** When a deployer modifies a system beyond what the provider documented, the deployer becomes a provider for compliance purposes. This closes the loophole of altering a system while still pointing to the original vendor for accountability.
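A minimal sketch of that reassignment rule, assuming a simplified reading of the Act (the function name and flags below are hypothetical):

```python
def effective_role(declared_role: str,
                   substantially_modified: bool,
                   rebranded: bool = False,
                   repurposed_to_high_risk: bool = False) -> str:
    """Simplified reading of the role-reassignment rule: an actor who
    substantially modifies a high-risk system, puts its own name or
    trademark on it, or repurposes it into high risk is treated as the
    provider, with the compliance duties that role carries."""
    if substantially_modified or rebranded or repurposed_to_high_risk:
        return "provider"
    return declared_role

# A deployer who alters a system beyond the documented envelope
# takes on provider obligations:
assert effective_role("deployer", substantially_modified=True) == "provider"
```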
**Downstream Provider Concept:** When one provider integrates another’s AI model, both bear obligations appropriate to their role. The integrator must understand what they’re incorporating.
**Information Flow Requirements:** Providers must give deployers the information needed to fulfill their obligations. Deployers, in turn, must give the humans assigned to oversight the authority and support needed to intervene.
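One way a deployer could operationalize the handoff side of this requirement, using placeholder artifact names rather than the Act’s own terms:

```python
# Artifacts a deployer might require from the provider before go-live.
# Placeholder names; the Act speaks of "instructions for use" and
# related technical information rather than this exact list.
REQUIRED_HANDOFF = frozenset({
    "instructions_for_use",
    "intended_purpose",
    "known_limitations",
    "human_oversight_measures",
})

def missing_handoff(received: set[str]) -> set[str]:
    """Return the required artifacts the provider has not yet supplied."""
    return set(REQUIRED_HANDOFF) - received

# Deployment should be blocked until this set is empty:
print(missing_handoff({"instructions_for_use", "intended_purpose"}))
```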
## Implications for Practice
Organizations acquiring AI systems cannot simply point to the vendor for compliance. Deployers must:
- Conduct their own risk assessments for their specific use context
- Implement human oversight appropriate to their operations
- Report serious incidents that occur during their deployment
- Maintain records of their use and any modifications
This creates demand for AI governance capabilities at every level, not just at AI development companies.
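As a sketch of what such a governance capability might look like on the deployer side, here is a hypothetical record type for evidencing these duties; the field names are invented for illustration, not mandated by the Act.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class DeploymentRecord:
    """Hypothetical per-system record a deployer keeps to evidence
    its own obligations; illustrative, not a legal template."""
    system_id: str
    use_context: str
    risk_assessment_date: Optional[date] = None
    oversight_owner: Optional[str] = None
    modifications: list[str] = field(default_factory=list)
    incidents_reported: list[str] = field(default_factory=list)

    def open_gaps(self) -> list[str]:
        """Flag obligations with no evidence on file yet."""
        gaps = []
        if self.risk_assessment_date is None:
            gaps.append("no context-specific risk assessment recorded")
        if self.oversight_owner is None:
            gaps.append("no named human-oversight owner")
        return gaps

record = DeploymentRecord(system_id="hr-screening-v2", use_context="CV triage")
print(record.open_gaps())  # both gaps flagged until the fields are filled
```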
## Limitations
The framework may come under strain in several settings:
- Open source ecosystems with diffuse contribution
- APIs where usage is difficult to monitor
- Rapidly evolving systems where roles blur
The regulation attempts to address some of these (e.g., partial exemptions for AI released under free and open-source licenses), but edge cases remain.
Related: 05-atom—provider-deployer-distinction, 05-molecule—risk-based-ai-classification