Confabulation vs. Hallucination
“Hallucination” and “fabrication” anthropomorphize AI systems: both terms inappropriately attribute human cognitive states to statistical prediction engines.
NIST explicitly rejects these terms in favor of “confabulation” because:
- Hallucinations imply subjective experience. Humans hallucinate because they have perceptual systems that can malfunction. LLMs don’t perceive anything; they predict tokens.
- The term obscures the mechanism. Calling outputs “hallucinations” makes them seem like bugs or errors. Confabulation is a natural result of how generative models work: they produce statistically plausible outputs, not verified truths (see the sketch after this list).
- Anthropomorphization itself is a risk. Framing AI in human terms leads users to expect human-like judgment, reliability, or understanding. This contributes to over-trust and automation bias.
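
To make the mechanism concrete, here is a minimal sketch of likelihood-driven generation. The prompt, candidate continuations, and probabilities are invented placeholders, not taken from any real model; the point is that selection is driven entirely by learned likelihood, with no truth check anywhere in the loop.

```python
import random

# Toy next-token distribution for the prompt "The capital of Australia is".
# The candidates and probabilities are illustrative placeholders: they stand
# in for how often each continuation appears in training text, not for
# whether the resulting sentence is true.
candidates = {
    "Sydney": 0.55,     # statistically common pairing, factually wrong
    "Canberra": 0.40,   # factually correct
    "Melbourne": 0.05,  # wrong
}

def sample_continuation(dist):
    """Pick a continuation weighted by likelihood alone.

    No step here consults a knowledge source or verifies the claim; the same
    sampling rule produces true and false completions with equal fluency.
    """
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights, k=1)[0]

if __name__ == "__main__":
    print([sample_continuation(candidates) for _ in range(10)])
```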
The distinction matters for risk management: confabulation is a feature of the architecture, not a failure to be patched. Systems designed around statistical prediction will always produce some proportion of confident-but-false outputs.
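
A back-of-the-envelope sketch of what that proportion means at deployment scale, assuming a purely hypothetical per-response confabulation rate and request volume (neither figure comes from the source or any measurement):

```python
# Hypothetical planning numbers; both values are illustrative placeholders.
confab_rate = 0.02          # assumed fraction of responses that are confidently false
responses_per_day = 50_000  # assumed deployment volume

expected_confabulations = confab_rate * responses_per_day
print(f"Expected confident-but-false outputs per day: {expected_confabulations:,.0f}")
# Because the rate is a property of the architecture rather than a bug to be
# patched, risk management has to budget for this baseline rather than
# assume it goes to zero.
```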
Related: 05-atom—confabulation-definition, 01-molecule—human-ai-configuration, 05-atom—uniform-confidence-problem