Conceptual Coherence Improves LLM Performance
The Principle
LLMs perform better when given inputs that cohere conceptually, not just inputs that are smaller.
Why This Matters
It’s tempting to treat LLM context limitations as purely a size problem, solved by chunking or summarization. But research on modular ontologies suggests something more interesting: the conceptual boundaries of what you provide matter as much as the quantity.
A module isn’t just a smaller piece of an ontology. It’s a piece that makes sense as a unit, where the classes, properties, and relationships “belong together” from a domain expert’s perspective. This conceptual coherence appears to provide better “priming” for the LLM, enabling more accurate pattern matching and inference.
How to Apply
When preparing structured input for LLM processing:
- Identify natural conceptual boundaries: Where would a domain expert draw lines? What pieces make sense as standalone units?
- Preserve coherence over completeness: Better to give a complete conceptual unit than fragments of multiple units.
- Name your modules meaningfully: Module names that capture the conceptual essence help with selection and reference.
- Test at the module level: If LLM performance is poor, the problem may be conceptual fragmentation rather than context size.
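The steps above can be sketched as a minimal module-selection helper. This is an illustrative assumption, not part of any framework: the module names, the example statements, and the `build_context` function are all hypothetical, standing in for whatever coherent units your domain defines.

```python
# Hypothetical modules: each name maps to a complete conceptual unit,
# i.e. a set of statements a domain expert would say "belong together".
MODULES = {
    "patient-encounter": [
        "Encounter hasParticipant Patient",
        "Encounter occursAt HealthcareFacility",
        "Encounter hasOutcome Diagnosis",
    ],
    "medication-order": [
        "Order prescribes Medication",
        "Order issuedBy Practitioner",
    ],
}

def build_context(module_names):
    """Assemble LLM input from whole modules only.

    Selecting a module includes all of its statements, so the model
    always receives a complete conceptual unit rather than fragments
    cherry-picked from several modules.
    """
    parts = []
    for name in module_names:
        if name not in MODULES:
            raise KeyError(f"unknown module: {name}")
        parts.append(f"## Module: {name}")
        parts.extend(MODULES[name])
    return "\n".join(parts)

context = build_context(["patient-encounter"])
```

The design choice is that the unit of inclusion is the module, never the individual statement: if the context budget can't fit a whole module, you drop the module rather than truncate it.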
Exceptions and Limits
This principle applies most strongly to structured, domain-specific tasks. For open-ended generation or creative tasks, the benefit of conceptual coherence may be less pronounced.
The principle also assumes the existence of meaningful conceptual boundaries. Some domains are more naturally modular than others.
Related: 05-molecule—two-stage-modular-prompting, 06-atom—ontology-design-pattern