Linear to Graph: The Evolution of LLM Reasoning Structures
Overview
Prompted LLM reasoning has evolved through three stages of increasing geometric complexity: chain, tree, and graph. Each stage enables qualitatively different capabilities.
The Three Stages
Chain-of-Thought (2022)
Structure: Linear sequence of reasoning steps
Core insight: Demonstrating intermediate steps in the prompt elicits step-by-step reasoning from the model
Enables: Multi-step arithmetic, basic logical reasoning
Limitation: No backtracking, no exploration, single path to answer
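A minimal sketch of the chain structure, assuming a hypothetical `call_model(prompt)` helper that wraps whatever completion API is in use: the model is prompted once, writes its steps in order, and never gets a chance to revise them.

```python
# Minimal chain-of-thought sketch. `call_model` is a placeholder for any
# text-completion call (API client, local model); it is not a real library function.
def call_model(prompt: str) -> str:
    raise NotImplementedError("plug in your model call here")

def chain_of_thought(question: str) -> str:
    # One linear pass: the model is asked to write intermediate steps before
    # the final answer. There is no branching and no backtracking.
    prompt = (
        "Solve the problem step by step, then give the result on a final "
        "line starting with 'Answer:'.\n\n"
        f"Problem: {question}\n"
    )
    completion = call_model(prompt)
    # The reasoning chain is the completion itself; extract the last
    # 'Answer:' line if the model produced one.
    for line in reversed(completion.splitlines()):
        if line.startswith("Answer:"):
            return line[len("Answer:"):].strip()
    return completion.strip()
```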
Tree-of-Thoughts (2023)
Structure: Branching paths with evaluation and pruning
Core insight: Thoughts can be scored for progress, enabling search
Enables: Exploration, look-ahead, deliberate backtracking
Limitation: Branches remain independent; insights cannot be combined across paths
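A sketch of the breadth-first variant of the tree structure, assuming hypothetical `propose` and `evaluate` helpers around model calls: candidate thoughts are generated, scored, and pruned at each depth, but surviving paths never exchange information.

```python
from typing import Callable

def tree_of_thoughts(
    question: str,
    propose: Callable[[str, list[str]], list[str]],  # (question, path) -> candidate next thoughts
    evaluate: Callable[[str, list[str]], float],     # (question, path) -> progress score
    depth: int = 3,
    beam_width: int = 5,
) -> list[str]:
    # Each frontier element is one partial path of thoughts, starting empty.
    frontier: list[list[str]] = [[]]
    for _ in range(depth):
        # Expand: every surviving path proposes several candidate next thoughts.
        candidates = [path + [t] for path in frontier for t in propose(question, path)]
        # Evaluate and prune: keep only the highest-scoring paths. Dropping a
        # low-scoring branch is the tree's form of backtracking.
        candidates.sort(key=lambda p: evaluate(question, p), reverse=True)
        frontier = candidates[:beam_width]
    # Return the best path found; paths never share partial results.
    return frontier[0]
```

The `depth` and `beam_width` parameters make the compute trade-off explicit: wider and deeper search explores more of the tree at proportionally higher cost.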
Graph-of-Thoughts (2023)
Structure: Directed graph with arbitrary connections
Core insight: Human thinking is non-linear; ideas connect across branches
Enables: Aggregating partial solutions, combining insights, dynamic interplay
Limitation: Higher computational cost, more complex orchestration
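A sketch of the graph-specific operation, aggregation, assuming hypothetical `generate`, `aggregate`, and `score` helpers around model calls: several partial solutions are drafted, the best are selected, and a merged thought with multiple parents is created, which neither a chain nor a tree can represent.

```python
from dataclasses import dataclass, field

@dataclass
class Thought:
    text: str
    score: float = 0.0
    parents: list["Thought"] = field(default_factory=list)

def graph_of_thoughts(question, generate, aggregate, score, branches=4, keep=2):
    # 1. Expand: draft several independent partial solutions (a tree could do this too).
    nodes = [Thought(text=t) for t in generate(question, n=branches)]
    for node in nodes:
        node.score = score(question, node.text)
    # 2. Select the most promising partial solutions.
    best = sorted(nodes, key=lambda n: n.score, reverse=True)[:keep]
    # 3. Aggregate: merge the selected thoughts into one combined thought. The
    #    merged node has several parents, an edge pattern only a graph allows.
    merged = Thought(text=aggregate(question, [n.text for n in best]), parents=best)
    merged.score = score(question, merged.text)
    return merged
```

Every extra generate, score, and aggregate step is another model call, which is where the higher computational cost and orchestration complexity come from.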
Key Differences
| Aspect | Chain | Tree | Graph |
|---|---|---|---|
| Exploration | None | Yes | Yes |
| Backtracking | No | Yes | Yes |
| Cross-path combination | No | No | Yes |
| Computational cost | Low | Medium | High |
| Implementation complexity | Simple | Moderate | Complex |
When Each Applies
- Chain: Single clear solution path, compute-constrained environments
- Tree: Problems requiring exploration, uncertain solution paths, “Game of 24” style puzzles
- Graph: Complex synthesis tasks, research questions, problems requiring insight combination
The Deeper Pattern
This evolution reflects a convergence toward how humans actually reason: not in clean linear steps, but through messy exploration, dead ends, and unexpected connections. The geometry of reasoning is not a technical detail; it determines what’s thinkable.
Related: 02-atom—format-shapes-cognition, 05-molecule—chain-of-thought-prompting