LLM-as-Oracle vs LLM-as-Aligner
Two fundamentally different approaches to using LLMs in knowledge engineering pipelines:
LLM-as-Aligner: The LLM performs the entire task, generating all candidate mappings, ranking them, and selecting the best. This is expensive (many API calls), brittle (it depends on the LLM's knowledge being complete), and often unnecessary.
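A minimal sketch of what the aligner pattern looks like in code, assuming a generic chat-completion client. The function name `align_with_llm_only` and the `call_llm` callable are hypothetical stand-ins, not a real API:

```python
from typing import Callable


def align_with_llm_only(
    source_terms: list[str],
    target_terms: list[str],
    call_llm: Callable[[str], str],  # stand-in for your LLM client
) -> str:
    """Ask the LLM to generate, rank, and select every mapping itself."""
    prompt = (
        "Map each source term to its best match among the target terms.\n"
        f"Source terms: {source_terms}\n"
        f"Target terms: {target_terms}\n"
        "Return one 'source -> target' pair per line."
    )
    # The single call here hides many in practice: large vocabularies must be
    # chunked, so cost grows with the full cross-product of terms.
    return call_llm(prompt)
```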
LLM-as-Oracle: A traditional system does the heavy lifting, then calls the LLM only for cases where it’s uncertain. The LLM validates specific candidates rather than generating the full solution space.
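By contrast, a sketch of the oracle pattern under the same assumptions (the `call_llm` callable and the threshold names `accept_above` / `reject_below` are illustrative choices, not prescribed values): a traditional matcher scores every candidate pair cheaply, and only pairs in the uncertain middle band are sent to the LLM for a yes/no verdict.

```python
from typing import Callable


def align_with_oracle(
    candidates: list[tuple[str, str, float]],  # (source, target, matcher_score)
    call_llm: Callable[[str], str],            # stand-in for your LLM client
    accept_above: float = 0.9,
    reject_below: float = 0.5,
) -> list[tuple[str, str]]:
    """Keep confident matches, drop confident non-matches, ask the LLM about the rest."""
    accepted = []
    for source, target, score in candidates:
        if score >= accept_above:
            accepted.append((source, target))   # traditional matcher is confident
        elif score < reject_below:
            continue                            # confident non-match, no LLM needed
        else:
            # Uncertain band: validate this specific candidate with the LLM.
            verdict = call_llm(
                f"Do '{source}' and '{target}' refer to the same concept? "
                "Answer yes or no."
            )
            if verdict.strip().lower().startswith("yes"):
                accepted.append((source, target))
    return accepted
```

Only the pairs between the two thresholds cost an API call, which is where the savings over the aligner approach come from.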
The key insight: LLMs don’t need to replace traditional systems. They can augment them at the points of highest uncertainty, where traditional methods struggle and LLM judgment adds the most value.
This distinction applies beyond ontology alignment, to any pipeline where you’re tempted to “just use AI for everything.”
Related: 05-molecule—targeted-llm-intervention-pattern, 01-atom—human-in-the-loop