AI Transparency Obligations Framework
The Framework
The EU AI Act establishes layered transparency requirements that vary by system type and risk level (a code sketch follows the table):
| Context | Transparency Requirement |
|---|---|
| Human interaction | Disclosure that interaction is with AI (unless obvious) |
| Emotion recognition / biometric categorisation | Disclosure to the persons exposed to the system |
| Synthetic content | Disclosure that content is AI-generated or manipulated |
| High-risk systems | Detailed documentation for deployers on capabilities, limitations, risks |
| Individual decisions | Explanation rights for affected persons |
| GPAI models | Training data summaries, downstream provider documentation |
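These layered duties can be modelled as a lookup from deployment context to obligations. A minimal Python sketch; the context and duty names paraphrase the table above and are not the Act's own terminology:

```python
from enum import Enum, auto

class Context(Enum):
    HUMAN_INTERACTION = auto()
    EMOTION_OR_BIOMETRIC = auto()
    SYNTHETIC_CONTENT = auto()
    HIGH_RISK = auto()
    INDIVIDUAL_DECISION = auto()
    GPAI_MODEL = auto()

# Paraphrased duties keyed by context. A real compliance mapping would
# cite specific articles and carve-outs (e.g. the "unless obvious"
# exception for human interaction).
OBLIGATIONS: dict[Context, list[str]] = {
    Context.HUMAN_INTERACTION: ["disclose AI interaction unless obvious"],
    Context.EMOTION_OR_BIOMETRIC: ["disclose emotion recognition / biometric categorisation"],
    Context.SYNTHETIC_CONTENT: ["label content as AI-generated or manipulated"],
    Context.HIGH_RISK: ["document capabilities, limitations, risks for deployers"],
    Context.INDIVIDUAL_DECISION: ["explain decisions to affected persons"],
    Context.GPAI_MODEL: ["publish training data summary",
                         "provide documentation to downstream providers"],
}

def obligations_for(contexts: set[Context]) -> list[str]:
    """Collect every transparency duty a system's contexts trigger."""
    return [duty for ctx in contexts for duty in OBLIGATIONS[ctx]]
```

Note that contexts stack: a high-risk system that also interacts with humans accumulates both sets of duties.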
Why It Matters
Transparency is treated as a governance mechanism, not just a communication nicety. The Act frames transparency as enabling:
- Informed consent: People knowing when they’re interacting with AI
- Appropriate reliance: Deployers understanding system limitations
- Accountability: Authorities being able to audit compliance
- Contestability: Affected persons being able to challenge decisions
Information Architecture Requirements
For Providers:
- Technical documentation that captures system design, training, and testing
- Clear instructions for use specifying intended purpose and misuse risks (sketched in code after this list)
- Transparency mechanisms that can’t be easily circumvented
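One way to make instructions for use auditable is to treat them as structured data rather than free text. A minimal sketch; the field names are hypothetical, not the Act's Annex terminology:

```python
from dataclasses import dataclass

@dataclass
class InstructionsForUse:
    """Illustrative provider documentation record; field names are
    stand-ins, not the Act's Annex terminology."""
    intended_purpose: str
    known_limitations: list[str]
    foreseeable_misuse: list[str]
    oversight_measures: str

    def validate(self) -> None:
        # A release pipeline could refuse documentation that omits
        # limitations or misuse risks entirely.
        if not self.known_limitations:
            raise ValueError("must state known limitations")
        if not self.foreseeable_misuse:
            raise ValueError("must state foreseeable misuse risks")
```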
For Deployers:
- Disclosure systems that inform users of AI involvement
- Logging capabilities for traceability (see the sketch after this list)
- Explanation processes for individual decisions
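The logging duty amounts to a durable, per-decision trace that later explanation processes can draw on. A minimal sketch; the record schema and JSONL file are assumptions, not anything the Act prescribes:

```python
import json
import time
import uuid

def log_ai_decision(system_id: str, input_summary: str,
                    output_summary: str, human_reviewer: str | None) -> dict:
    """Append one traceability record per AI-assisted decision.

    Illustrative only: a real deployment would add retention policies,
    access control, and tamper-evidence.
    """
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "system_id": system_id,
        "input_summary": input_summary,
        "output_summary": output_summary,
        "human_reviewer": human_reviewer,  # feeds later explanation requests
    }
    with open("ai_decision_log.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")
    return record
```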
For Content:
- Labeling of AI-generated text, images, audio, video
- Machine-readable markers enabling automated detection
- Preservation of provenance through content lifecycles (see the sketch below)
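Preserving provenance across a content lifecycle is a chain-of-custody problem: each transformation should commit to both the new content state and the previous record. A minimal hash-chain sketch in plain Python; real systems would more likely adopt a standard such as C2PA:

```python
import hashlib
import json

def provenance_entry(content: bytes, action: str, prev_hash: str | None) -> dict:
    """One link in a hash-chained provenance record: each step commits to
    the content state and the previous entry, so tampering anywhere
    breaks the chain."""
    entry = {
        "action": action,  # e.g. "generated", "cropped", "compressed"
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "prev_entry_hash": prev_hash,
    }
    entry_hash = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    return {"entry": entry, "entry_hash": entry_hash}

# Lifecycle: generation, then an edit; each step links to the last.
first = provenance_entry(b"<image bytes v1>", "generated", None)
second = provenance_entry(b"<image bytes v2>", "cropped", first["entry_hash"])
```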
The Deepfake Challenge
Article 50 creates specific obligations for synthetic media that resembles real persons, events, or places. This extends beyond simple AI disclosure to content authentication, a harder information architecture problem.
The regulation targets content that “would falsely appear to a person to be authentic or truthful,” implying systems must also assess likely perception, not just technical origin.
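Read literally, the trigger is conjunctive: the content is AI-generated or manipulated, resembles real persons, places, or events, and would plausibly pass as authentic. A hypothetical decision sketch; the perceived_authenticity score and its 0.5 threshold are illustrative assumptions, not anything the Act defines:

```python
def requires_deepfake_disclosure(is_ai_generated: bool,
                                 resembles_real_entity: bool,
                                 perceived_authenticity: float) -> bool:
    """Illustrative reading of the deep-fake disclosure trigger:
    synthetic content resembling real persons, places, or events that
    would plausibly be taken as authentic. The 0.5 threshold is an
    arbitrary stand-in for 'would falsely appear authentic'."""
    return (is_ai_generated
            and resembles_real_entity
            and perceived_authenticity > 0.5)
```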
Limitations
Transparency requirements don’t:
- Guarantee understanding (disclosure ≠ comprehension)
- Address attention limits (people may ignore disclosures)
- Solve the explainability research problem (some systems can’t be meaningfully explained)
The gap between what transparency obligations require and what human cognition can absorb remains significant.
Related: 01-molecule—human-oversight-as-design-requirement, 04-atom—provenance-design