Three Categories of AI Bias
NIST identifies three distinct categories of bias that can affect AI systems (NIST Special Publication 1270). Each can occur even in the absence of prejudice, partiality, or discriminatory intent:
Systemic Bias: Present in AI datasets, in organizational norms and practices across the AI lifecycle, and in broader society. Reflects historical inequities embedded in institutions and data collection processes.
Computational and Statistical Bias: Present in datasets and algorithmic processes. Often stems from systematic errors due to non-representative samples, measurement choices, or model architecture decisions.
Human-Cognitive Bias: Relates to how individuals or groups perceive AI system information, make decisions, or fill in missing information. Omnipresent in decision-making processes across the AI lifecycle, from design through deployment and maintenance.
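The computational/statistical category is the easiest to make concrete. A minimal, hypothetical sketch (all numbers invented for illustration): a non-representative sample systematically skews an estimate even when the estimator itself is sound, and inverse-probability weighting, one standard correction, recovers the population value.

```python
# Population: two groups with different positive-outcome rates.
population = [("A", 1)] * 700 + [("A", 0)] * 300   # group A: 70% positive
population += [("B", 1)] * 400 + [("B", 0)] * 600  # group B: 40% positive

true_rate = sum(y for _, y in population) / len(population)  # 0.55

# Biased sampling: group A is sampled at 80%, group B at only 40%,
# so the sample over-represents the high-rate group.
INCLUSION = {"A": 0.8, "B": 0.4}

def included(index: int, group: str) -> bool:
    """Deterministically include the stated fraction of each group."""
    return (index % 5) / 5 < INCLUSION[group]

sample = [(g, y) for i, (g, y) in enumerate(population) if included(i, g)]
biased_rate = sum(y for _, y in sample) / len(sample)  # inflated to 0.60

# Inverse-probability weighting: weight each record by 1 / P(inclusion)
# for its group, which undoes the distorted sampling.
weighted_sum = sum(y / INCLUSION[g] for g, y in sample)
total_weight = sum(1 / INCLUSION[g] for g, _ in sample)
reweighted_rate = weighted_sum / total_weight  # back to 0.55

print(f"true: {true_rate:.2f}  biased: {biased_rate:.2f}  reweighted: {reweighted_rate:.2f}")
```

Note that the correction only works because the sampling mechanism is known; in practice the human-cognitive dimension enters precisely when practitioners must judge whether such an assumption holds.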
Key insight: bias management cannot focus solely on data or algorithms. The human-cognitive dimension affects every phase, including how practitioners interpret bias metrics and decide what constitutes acceptable performance.
Related: 05-atom—trustworthy-ai-characteristics, 07-atom—human-ai-teaming-bias-amplification