Transparency vs. Explainability vs. Interpretability
Three distinct but mutually supporting characteristics of AI systems:
Transparency answers “What happened?” The extent to which information about an AI system and its outputs is available to individuals interacting with it. Spans from design decisions and training data through deployment and use decisions.
Explainability answers “How was a decision made?” A representation of the mechanisms underlying the AI system’s operation. Enables debugging, monitoring, and more thorough documentation, audit, and governance.
Interpretability answers “Why was this decision made, and what does it mean?” The meaning of AI system output in the context of its designed functional purpose. Helps users contextualize outputs and understand their implications.
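As a rough illustration of how the three questions differ in practice, the sketch below attaches a separate record for each one to a single model output. The scenario (a credit-risk score), the field names, and the threshold are all invented for this example, not drawn from any particular framework or API.

```python
from dataclasses import dataclass

# Hypothetical sketch: one record per question, attached to one model output.

@dataclass
class TransparencyRecord:          # "What happened?"
    model_version: str             # which model produced the output
    training_data_summary: str     # provenance of the data it learned from
    deployment_context: str        # where and how the system is being used

@dataclass
class Explanation:                 # "How was the decision made?"
    feature_contributions: dict    # e.g., attribution scores per input feature

@dataclass
class Interpretation:              # "Why, and what does it mean?"
    functional_purpose: str        # what the score is designed to represent
    meaning_for_user: str          # the output restated in task terms

decision = {
    "output": 0.23,
    "transparency": TransparencyRecord(
        model_version="credit-risk-v4",
        training_data_summary="2015-2023 loan repayment records",
        deployment_context="pre-screening of consumer loan applications",
    ),
    "explanation": Explanation(
        feature_contributions={"debt_to_income": -0.31, "payment_history": 0.12},
    ),
    "interpretation": Interpretation(
        functional_purpose="estimated probability of default within 24 months",
        meaning_for_user="0.23 falls below the 0.30 referral threshold, so the "
                         "application proceeds without manual review",
    ),
}

print(decision["interpretation"].meaning_for_user)
```

The point of separating the records is that each can be present or absent independently: a system can disclose its provenance (transparency) while offering no account of its mechanism (explainability), or explain a mechanism whose output still means nothing to the end user (interpretability).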
A transparent system is not necessarily accurate, secure, or fair. But it’s difficult to determine whether an opaque system possesses such characteristics, and to verify this over time as systems evolve.
Related: 05-atom—trustworthy-ai-characteristics, 05-atom—uniform-confidence-problem