AI-Assisted Workflow Economics

The Framework

A model for calculating the time and cost savings of AI assistance in knowledge work, accounting for review overhead, failure rates, and fallback to human completion.

Why It Matters

Naive comparisons of “AI completion time vs. human completion time” dramatically overstate potential savings. When you factor in human review time and the probability of unsatisfactory outputs requiring redo, the economics shift significantly.

How It Works

Three workflow scenarios with different economics:

Scenario 1: Direct Use

  • Use model output directly
  • Fastest, but quality varies
  • Risk: catastrophic errors (3% of model failures rated “catastrophic”)

Scenario 2: Try Once, Then Fix

  • Sample from model, review output
  • If unsatisfactory, human completes task
  • Formula: Model time + Review time + (1 - win rate) × Human time (see the sketch below)

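A minimal sketch of the Scenario 2 calculation. The function and parameter names are illustrative, not from the original; they just encode the formula above.

```python
def expected_time_try_once(model_t, review_t, human_t, win_rate):
    """Expected time for the 'try once, then fix' workflow (Scenario 2).

    The model call and the review are always paid; with probability
    (1 - win_rate) the output misses the quality bar and a human
    completes the task from scratch.
    """
    return model_t + review_t + (1.0 - win_rate) * human_t
```
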
Scenario 3: Try N Times, Then Fix

  • Sample repeatedly, reviewing each time
  • Human steps in only after N failed attempts
  • As N → ∞ with win rate > 0: approaches (Model time + Review time) / win rate (derivation sketched below)

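A sketch of the finite-N case, assuming each attempt is independent with the same win rate (names again illustrative). Attempt k is needed only if the first k − 1 attempts failed, so the expected number of attempts is (1 − (1 − w)^N) / w, and the human fallback is paid with probability (1 − w)^N; letting N grow recovers the (Model time + Review time) / win rate limit.

```python
def expected_time_try_n(model_t, review_t, human_t, win_rate, n_attempts):
    """Expected time for the 'try N times, then fix' workflow (Scenario 3).

    Each attempt costs model_t + review_t and happens only if all earlier
    attempts failed; after n_attempts failures a human completes the task.
    """
    p_fail = 1.0 - win_rate
    expected_attempts = (1.0 - p_fail ** n_attempts) / win_rate
    p_all_fail = p_fail ** n_attempts
    return (model_t + review_t) * expected_attempts + p_all_fail * human_t
```
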
When to Apply

Use this framework when evaluating AI tool adoption for any knowledge work task. The key variables, used in the break-even sketch after this list, are:

  • Win rate (how often model output meets quality bar)
  • Review time (overhead to assess each output)
  • Human completion time (fallback cost)

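For the try-once workflow, the adoption question reduces to a break-even check: starting from Model time + Review time + (1 - win rate) × Human time < Human time and simplifying, the workflow saves time only when Model time + Review time < win rate × Human time. A hedged sketch of that check (names illustrative):

```python
def try_once_saves_time(model_t, review_t, human_t, win_rate):
    """True when 'try once, then fix' beats the all-human baseline.

    Derived from model_t + review_t + (1 - win_rate) * human_t < human_t,
    which simplifies to model_t + review_t < win_rate * human_t.
    """
    return model_t + review_t < win_rate * human_t
```
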
Limitations

  • Assumes review time is fixed (in practice, it may decrease with experience)
  • Doesn’t capture cost of catastrophic errors in high-stakes domains
  • Win rate may improve with prompt iteration (not captured in simple models)
  • Doesn’t account for partial human editing of model outputs

Current best frontier models (Claude Opus 4.1 at a ~48% win rate) can save time and money under these workflows, but the savings are modest compared to naive “300x faster” claims.
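
To make that concrete, a hypothetical plug-in of the ~48% win rate; all task times here are invented for illustration, not measured.

```python
# Hypothetical task: 60 min of human work, 2 min per model call, 5 min per review,
# and a ~48% win rate (times invented for illustration only).
model_t, review_t, human_t, win_rate = 2.0, 5.0, 60.0, 0.48

try_once = model_t + review_t + (1 - win_rate) * human_t      # ≈ 38.2 min
retry_limit = (model_t + review_t) / win_rate                 # ≈ 14.6 min

print(f"all-human: {human_t:.0f} min, try-once: {try_once:.1f} min, "
      f"retry-until-win limit: {retry_limit:.1f} min")
```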

Related: 05-atom—expert-parity-trajectory, 01-atom—human-in-the-loop