The Reification Gap in LLM Modeling

LLMs handle simple property modeling well but struggle with reification: turning relationships into first-class entities that can themselves have properties.

In ontology generation benchmarks, questions about simple data and object properties achieved over 90% accuracy. Reification patterns dropped to 66% or lower, and restrictions (cardinality constraints, value constraints) performed worst, scoring near zero for some techniques.

Reification requires treating a relationship as a thing. “John works at Acme” becomes an Employment entity with its own start date, role, and department. This transformation from a binary predicate to an n-ary pattern seems to be where LLM modeling capability breaks down.
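A minimal sketch of both shapes, assuming Python with rdflib; the ex: namespace, the class and property names, and the data values are illustrative placeholders, not taken from any particular benchmark or ontology:

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

EX = Namespace("http://example.org/")  # illustrative namespace, not from the source

g = Graph()
g.bind("ex", EX)

# Binary predicate: the relationship is just an edge between two entities,
# so it cannot carry a start date, role, or department of its own.
g.add((EX.John, EX.worksAt, EX.Acme))

# Reified (n-ary) pattern: the relationship becomes an Employment node
# that links the participants and holds its own properties.
employment = EX.JohnAcmeEmployment  # hypothetical identifier for the relationship instance
g.add((employment, RDF.type, EX.Employment))
g.add((employment, EX.employee, EX.John))
g.add((employment, EX.employer, EX.Acme))
g.add((employment, EX.startDate, Literal("2021-03-01", datatype=XSD.date)))
g.add((employment, EX.role, Literal("Data Engineer")))
g.add((employment, EX.department, Literal("Platform")))

print(g.serialize(format="turtle"))
```

Serializing the graph makes the difference visible: the simple version is a single triple, while the reified version is a node with its own type and attributes that other statements can point to.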

The gap matters because real-world knowledge is full of reified relationships. Employment, authorship, measurements, and transactions all require modeling the relationship itself, not just connecting two entities. Systems that can only handle simple properties miss where most of the complexity lives.

Related: 06-molecule—ontology-design-patterns