The LLM-KG Paradigm Inversion

The relationship between large language models and knowledge graphs is inverting.

Early work framed LLMs as tools for building better knowledge graphs: automating extraction, improving entity resolution, scaling ontology construction. The knowledge graph was the goal; the LLM was the means.
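
A rough sketch of that first direction, with loud caveats: `llm_complete` is a hypothetical stub standing in for any completion API, and the pipe-delimited output format is invented for illustration, not any specific extraction library.

```python
def llm_complete(prompt: str) -> str:
    """Hypothetical stub; a real system would call a model API here."""
    return "Marie Curie | won | Nobel Prize in Physics"

def extract_triples(text: str) -> list[tuple[str, str, str]]:
    """Ask the model for 'subject | relation | object' lines, then parse them."""
    prompt = f"Extract facts as 'subject | relation | object' lines:\n{text}"
    triples = []
    for line in llm_complete(prompt).splitlines():
        parts = [p.strip() for p in line.split("|")]
        if len(parts) == 3:
            triples.append((parts[0], parts[1], parts[2]))
    return triples

# The graph is the goal: extracted edges accumulate into an adjacency store,
# and the model is a disposable extractor along the way.
graph: dict[str, list[tuple[str, str]]] = {}
for s, r, o in extract_triples("Marie Curie won the Nobel Prize in Physics."):
    graph.setdefault(s, []).append((r, o))
print(graph)  # {'Marie Curie': [('won', 'Nobel Prize in Physics')]}
```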

Current work increasingly frames knowledge graphs as infrastructure for grounding LLMs: providing factual anchoring, structured memory, and reasoning scaffolding. The LLM is the goal; the knowledge graph is the means.
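
And a sketch of the inverted direction, equally hedged: the toy graph, the substring entity match, and the prompt layout below are all assumptions for illustration, not any particular GraphRAG system.

```python
# Toy knowledge graph; a real deployment would back this with a graph database.
kg = {
    "Marie Curie": [("won", "Nobel Prize in Physics"),
                    ("born in", "Warsaw")],
}

def ground(question: str) -> str:
    """Pull facts about entities named in the question into the prompt."""
    facts = [f"{s} {r} {o}"
             for s, edges in kg.items() if s in question
             for r, o in edges]
    context = "\n".join(facts) or "(no matching facts)"
    return f"Answer using only these facts:\n{context}\n\nQ: {question}\nA:"

# The LLM is the goal: the structured lookup just anchors its generation.
print(ground("Where was Marie Curie born?"))
```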

This isn’t just a shift in tooling. It’s a reconception of what knowledge graphs are for. They’re evolving from static repositories built for human interpretation into dynamic substrates for machine reasoning.

Related: 07-molecule—vectors-vs-graphs