Any Performance Information Beats None
Giving users any information about an AI system’s performance improves task outcomes relative to giving them none, even when that information is as simple as “this model is 80% accurate.”
In the Behzad et al. study, every condition with performance feedback (overall accuracy, confidence scores, contextual awareness) outperformed the prediction-only condition on task performance. Differences among the feedback types were more nuanced, but the gap between any feedback and none was consistent.
This suggests a low-hanging-fruit intervention: if your AI system currently just shows predictions without any performance context, adding even a simple accuracy statement may meaningfully improve human-AI teaming outcomes.
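As a minimal sketch of what that intervention might look like in code: the function below attaches a plain-language accuracy statement to a prediction before it is shown to the user. The function name and its parameters are hypothetical, not from the study or any particular library.

```python
from typing import Optional

def format_prediction(label: str, accuracy: Optional[float] = None) -> str:
    """Render a prediction, appending a simple accuracy statement when one is known."""
    if accuracy is None:
        # Prediction-only display: the baseline condition with no performance context.
        return f"Prediction: {label}"
    # Even this one-line accuracy statement is "some feedback" rather than none.
    return f"Prediction: {label} (this model is correct about {accuracy:.0%} of the time)"

print(format_prediction("spam"))
print(format_prediction("spam", accuracy=0.80))
```

The point is not the formatting but the contrast: the second call is the low-cost change the note describes, moving a UI from no performance context to some.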
The more interesting question is which kind of performance information produces the best calibration, but that’s a refinement question. The baseline insight is that transparency about performance, in any form, helps.
Related: 01-atom—calibrated-trust-vs-high-trust, 01-molecule—performance-feedback-spectrum, 07-molecule—ui-as-ultimate-guardrail