Right to Explanation (AI Decisions)
Under Article 86 of the EU AI Act, any affected person subject to a decision taken on the basis of output from a high-risk AI system has the right to obtain from the deployer "clear and meaningful explanations of the role of the AI system in the decision-making procedure and the main elements of the decision taken."
This right applies when the decision produces legal effects or similarly significantly affects the person in a way they consider to have an adverse impact on their health, safety, or fundamental rights.
The explanation must cover both the AI system's role and the decision's main elements: not just what the system did, but how its output factored into the final human judgment.
This creates an information architecture requirement: systems must be designed to produce explainable outputs, and organizations must have processes to translate those outputs into plain language an affected person can understand.
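A minimal sketch of what such a design could capture per decision, assuming the record is built at decision time rather than reconstructed later. All names here (DecisionRecord, explainDecision, the role and weight vocabularies) are hypothetical illustrations, not a schema the AI Act or any standard prescribes.

```typescript
// Hypothetical sketch: illustrative names only, not a prescribed Article 86 schema.

/** How the AI system's output entered the decision-making procedure. */
type SystemRole = "sole-input" | "one-factor-among-several" | "advisory-only";

/** One factor the system surfaced, with the weight the human reviewer gave it. */
interface DecisionFactor {
  label: string;          // plain-language name of the factor
  systemOutput: string;   // what the AI system reported for this factor
  humanWeight: "decisive" | "supporting" | "disregarded";
}

/** Per-decision record kept so an explanation can be produced on request. */
interface DecisionRecord {
  decisionId: string;
  decisionTaken: string;  // the main elements of the decision
  systemRole: SystemRole; // the AI system's role in the procedure
  factors: DecisionFactor[];
  humanReviewer: string;  // who made the final judgment
  decidedAt: Date;
}

/** Translate the stored record into plain language for the affected person. */
function explainDecision(record: DecisionRecord): string {
  const roleText: Record<SystemRole, string> = {
    "sole-input": "the AI system's output was the main basis for the decision",
    "one-factor-among-several": "the AI system's output was one factor among several",
    "advisory-only": "the AI system's output was advisory and reviewed by a person",
  };
  const factorLines = record.factors
    .map(f => `- ${f.label}: ${f.systemOutput} (treated as ${f.humanWeight})`)
    .join("\n");
  return [
    `Decision: ${record.decisionTaken}`,
    `Role of the AI system: ${roleText[record.systemRole]}.`,
    `Main elements considered:`,
    factorLines,
    `Final judgment by: ${record.humanReviewer} on ${record.decidedAt.toDateString()}.`,
  ].join("\n");
}

// Usage: populate the record when the decision is made, not retroactively.
const example: DecisionRecord = {
  decisionId: "d-001",
  decisionTaken: "Loan application declined",
  systemRole: "one-factor-among-several",
  factors: [
    { label: "Credit risk score", systemOutput: "high risk (0.82)", humanWeight: "supporting" },
    { label: "Income verification", systemOutput: "inconsistent records", humanWeight: "decisive" },
  ],
  humanReviewer: "Credit officer",
  decidedAt: new Date("2025-01-15"),
};

console.log(explainDecision(example));
```

The design choice worth noting: the record stores both the system's output and the weight a human gave it, because an explanation that covers only the model's behavior cannot answer how it factored into the final judgment.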
Related: 02-molecule—ai-transparency-obligations-framework, 01-molecule—human-oversight-as-design-requirement, 04-atom—provenance-design