# Leading vs. Lagging Indicators for AI Economic Impact
## The Comparison
Two fundamentally different approaches to measuring AI’s economic significance:
### Leading Indicators (Capability-Based)
- AI capability evaluations and benchmarks
- Performance on task-based assessments
- Head-to-head comparisons with human experts
- Measures: What AI can do in controlled conditions
### Lagging Indicators (Adoption-Based)
- Adoption rates and usage patterns
- GDP growth attributed to AI
- Productivity statistics
- Labor market shifts
- Measures: What AI does in the wild
## Key Differences
| Dimension | Leading | Lagging |
|---|---|---|
| Timing | Immediate | Delayed by years or decades |
| Attribution | Clear (controlled test) | Confounded (many factors) |
| Realism | Lower (optimized conditions) | Higher (messy reality) |
| Actionability | High for capability providers | High for policy makers |
| Gaming risk | High (benchmark optimization) | Lower (harder to fake GDP) |
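The timing row is the crux. As a rough illustration, here is a minimal sketch of why the two signal families diverge: a capability jump registers on benchmarks immediately, while its productivity effect follows an adoption curve. All parameters (the logistic diffusion shape, the 8-year midpoint, the 15% ceiling) are hypothetical, chosen only to make the lag visible.

```python
import math

# Hypothetical illustration: a capability benchmark jumps at year 0,
# but measured productivity follows a logistic adoption curve with a
# multi-year midpoint. Every parameter below is assumed, not sourced.

CAPABILITY_JUMP_YEAR = 0
ADOPTION_MIDPOINT = 8         # years until 50% adoption (assumed)
ADOPTION_STEEPNESS = 0.7      # how fast diffusion accelerates (assumed)
MAX_PRODUCTIVITY_GAIN = 0.15  # 15% ceiling once fully diffused (assumed)

def leading_signal(year: float) -> float:
    """Benchmark score: moves immediately when the capability lands."""
    return 1.0 if year >= CAPABILITY_JUMP_YEAR else 0.0

def lagging_signal(year: float) -> float:
    """Measured productivity gain: tracks diffusion, not capability."""
    adoption = 1.0 / (1.0 + math.exp(-ADOPTION_STEEPNESS * (year - ADOPTION_MIDPOINT)))
    return MAX_PRODUCTIVITY_GAIN * adoption

for year in range(0, 21, 4):
    print(f"year {year:2d}: leading={leading_signal(year):.1f}, "
          f"lagging={lagging_signal(year):.3f}")
```

The specific numbers don't matter; the shape does. Even with instant, fully measurable capability, the lagging measure barely moves for the first several years.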
## Why This Matters
Historical evidence from electricity, computers, and airplanes shows that invention-to-permeation transitions take years or decades. Waiting for lagging indicators means missing the window for proactive response.
But capability benchmarks have their own distortions: optimized test conditions, precisely specified tasks, and the gap between “can do” and “does do in practice.”
## When Each Applies
Use leading indicators when:
- Forecasting potential disruption
- Making infrastructure investments
- Evaluating capability providers' claims of relevance
Use lagging indicators when:
- Assessing actual economic impact
- Justifying policy interventions
- Measuring real productivity gains
The most honest position: use both, acknowledge the gap, and be skeptical of anyone claiming certainty in either direction.
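One way to operationalize that stance is to track both signal families side by side and flag divergence rather than trust either alone. A minimal sketch, where the 0-1 normalization, the field names, and the gap threshold are all hypothetical:

```python
from dataclasses import dataclass

@dataclass
class IndicatorReading:
    """Normalized 0-1 readings for one domain; scales are assumed."""
    benchmark_score: float  # leading: capability in controlled tests
    adoption_rate: float    # lagging: observed real-world usage

def interpret(reading: IndicatorReading, gap_threshold: float = 0.4) -> str:
    """Report agreement or flag a gap; never let one family speak for both."""
    gap = reading.benchmark_score - reading.adoption_rate
    if abs(gap) <= gap_threshold:
        return "signals agree: moderate confidence in either direction"
    if gap > 0:
        return ("capability leads adoption: possible diffusion lag, "
                "or a 'can do' vs. 'does do' gap")
    return ("adoption leads capability: gains may be driven by "
            "factors other than the measured capability")

print(interpret(IndicatorReading(benchmark_score=0.9, adoption_rate=0.2)))
```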
Related: 05-atom—context-specification-gap