Human-AI Teaming Can Amplify Bias

Under certain conditions, particularly in perceptual judgment tasks, the AI component of a human-AI team can amplify human biases, producing decisions more biased than either the human or the AI alone would make.

The mechanism: humans may defer to AI outputs that confirm their existing biases while overriding outputs that challenge them. AI systems, in turn, may have learned patterns that encode historical biases.

The interaction creates a feedback loop that neither party would produce independently.
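This deference asymmetry can be sketched as a toy simulation. The setup below is purely illustrative (not from any cited study): all cases share one ground-truth label, both human and AI err toward the bias-aligned label at some rate, and the human defers to the AI only when its output confirms their leaning while overriding outputs that challenge it.

```python
import random

def simulate_team(trials=100_000, human_err=0.3, ai_err=0.3, seed=0):
    """Toy model of confirmation-biased deference (illustrative only).

    Every case has ground truth 0; answering 1 is the bias-aligned error.
    Team rule: if the AI outputs the bias-aligned label, the human defers
    (it confirms their leaning); otherwise the human overrides the AI and
    keeps their own answer.
    """
    rng = random.Random(seed)
    h_wrong = a_wrong = t_wrong = 0
    for _ in range(trials):
        human = rng.random() < human_err  # True = biased answer (1)
        ai = rng.random() < ai_err
        team = ai or human  # defer when the AI confirms, else keep own answer
        h_wrong += human
        a_wrong += ai
        t_wrong += team
    return {
        "human": h_wrong / trials,
        "ai": a_wrong / trials,
        "team": t_wrong / trials,
    }

rates = simulate_team()
print(rates)  # team error rate exceeds both individual error rates
```

With independent 30% error rates, the team's bias-aligned error rate approaches 1 - 0.7 * 0.7 = 0.51: the asymmetric deference rule lets either party's biased output pass through, which is the amplification neither would produce alone.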

The inverse also holds: when differences between human and AI judgment are judiciously accounted for in how human-AI teams are organized, the result can be complementarity and improved overall performance.

This suggests that designing human-AI configurations requires explicit attention to bias dynamics: not just technical debiasing of the model, and not just human training, but careful orchestration of how decisions flow between human and machine.

Related: 05-atom—three-categories-ai-bias, 05-atom—automation-bias-regulatory-recognition