Where Else Does “Ask the LLM Only When Uncertain” Apply?
Ontology alignment showed that invoking the LLM only for uncertain cases, rather than for the entire task, is more cost-efficient while producing quality comparable to human experts.
What other knowledge engineering tasks follow this pattern?
Candidates to explore:
- Data quality validation (flag anomalies, LLM validates edge cases)
- Entity resolution (traditional blocking + LLM verification of ambiguous matches)
- Taxonomy maintenance (automated structure + LLM review of proposed changes)
- Document classification (ML classifier + LLM second opinion on low-confidence assignments; sketched after this list)
- Schema evolution (diff detection + LLM assessment of breaking changes)
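
To make the document-classification case concrete, here is a minimal sketch of the routing logic. It assumes a classifier that returns a label with a confidence score; `llm_second_opinion`, `classify_with_escalation`, and the threshold value are all illustrative names and assumptions, not a real API.

```python
from dataclasses import dataclass
from typing import Callable

CONFIDENCE_THRESHOLD = 0.85  # assumed cutoff; tune per task and classifier

@dataclass
class Assignment:
    label: str
    confidence: float
    source: str  # "classifier" (bulk path) or "llm" (boundary path)

def classify_with_escalation(
    doc: str,
    classifier: Callable[[str], tuple[str, float]],
    llm_second_opinion: Callable[[str, str], str],
) -> Assignment:
    """Route only low-confidence classifier outputs to the LLM."""
    label, confidence = classifier(doc)
    if confidence >= CONFIDENCE_THRESHOLD:
        # Bulk path: the traditional classifier is confident enough.
        return Assignment(label, confidence, source="classifier")
    # Boundary path: surface the uncertainty and ask the LLM for a
    # targeted judgment on just this document.
    llm_label = llm_second_opinion(doc, label)
    return Assignment(llm_label, confidence, source="llm")

if __name__ == "__main__":
    # Stub classifier and LLM, for illustration only.
    fake_classifier = lambda doc: ("invoice", 0.62)
    fake_llm = lambda doc, hint: "receipt"
    print(classify_with_escalation("Total due: $42", fake_classifier, fake_llm))
```

The same routing shape applies to each candidate above; only the bulk system and the uncertainty signal change.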
The common structure: a traditional system handles the bulk confidently, surfaces its uncertainty explicitly, and the LLM provides targeted judgment at the decision boundaries.
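
That structure can be written down as a generic skeleton. This is a sketch under assumed interfaces, with every callable name illustrative, not a prescribed implementation:

```python
from typing import Callable, Iterable, TypeVar

T = TypeVar("T")  # input item (mapping, record, document, schema diff, ...)
R = TypeVar("R")  # candidate result produced by the traditional system

def targeted_llm_pipeline(
    items: Iterable[T],
    bulk_handler: Callable[[T], tuple[R, float]],  # traditional system + score
    is_uncertain: Callable[[R, float], bool],      # explicit uncertainty surface
    llm_judge: Callable[[T, R], R],                # targeted LLM judgment
) -> list[R]:
    results = []
    for item in items:
        candidate, score = bulk_handler(item)
        if is_uncertain(candidate, score):
            # LLM is invoked only at the decision boundary.
            candidate = llm_judge(item, candidate)
        results.append(candidate)
    return results
```

Each candidate task instantiates this with a different `bulk_handler` and uncertainty test; the cost savings come from how rarely `llm_judge` fires.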
Related: 05-molecule—targeted-llm-intervention-pattern, 05-atom—llm-as-oracle-vs-aligner