Human Agency Scale Framework

Overview

The Human Agency Scale (HAS) is a shared vocabulary for describing the spectrum between full AI automation and human-AI collaboration. Unlike automation-focused frameworks (such as the SAE driving-automation levels), HAS centers human agency as the organizing principle.

The Five Levels

H1: Full Automation

AI agent handles the task entirely on its own without human involvement.

Team dynamic: AI drives task completion
Example tasks: Transcribe data to worksheets, run monthly network reports
AI role: Replace human capabilities

H2: Minimal Human Input

AI agent needs human input at a few key points to achieve better task performance.

Team dynamic: AI drives with checkpoints
Example tasks: Devise trading strategies, accept payment on accounts
AI role: Replace with supervision

H3: Equal Partnership

AI agent and human work together to outperform either alone.

Team dynamic: True collaboration
Example tasks: Create game features including storylines, compile and analyze experimental data
AI role: Enhance human capabilities

H4: Human-Led with AI Support

AI agent needs human input to successfully complete the task.

Team dynamic: Human drives with AI assistance
Example tasks: Coordinate financial planning and budgeting, design training programs
AI role: Augment human work

H5: Essential Human Involvement

Task completion fully relies on human involvement.

Team dynamic: Human drives task completion
Example tasks: Participate in forums to stay current, direct client-facing activities
AI role: Support at margins
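
The five levels above can also be read as a small controlled vocabulary for tagging tasks. The sketch below is a rough illustration only; the enum name, member names, and example mappings are hypothetical, not part of the framework's own materials.

    # Hypothetical sketch: the five HAS levels as a tagging vocabulary.
    # Names and example mappings are illustrative, not an official schema.
    from enum import IntEnum

    class HASLevel(IntEnum):
        H1_FULL_AUTOMATION = 1      # AI drives task completion
        H2_MINIMAL_HUMAN_INPUT = 2  # AI drives with checkpoints
        H3_EQUAL_PARTNERSHIP = 3    # true collaboration
        H4_HUMAN_LED = 4            # human drives with AI assistance
        H5_ESSENTIAL_HUMAN = 5      # human drives task completion

    # Example tags, using tasks mentioned above (illustrative values only).
    desired_level = {
        "transcribe data to worksheets": HASLevel.H1_FULL_AUTOMATION,
        "compile and analyze experimental data": HASLevel.H3_EQUAL_PARTNERSHIP,
        "direct client-facing activities": HASLevel.H5_ESSENTIAL_HUMAN,
    }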

Why This Matters

Higher HAS levels aren’t inherently better; different levels suit different contexts. The framework helps:

  1. Developers design agents appropriate to the task (H1 tasks need autonomy; H3 tasks need coordination interfaces; see the sketch after this list)
  2. Workers understand where their agency matters most
  3. Organizations set realistic expectations for AI deployment
  4. Researchers study human-AI collaboration systematically
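
To make the first point concrete, a designer might map a task's HAS level to an interaction pattern before building the agent. This is a minimal sketch under stated assumptions: it reuses the hypothetical HASLevel enum from the earlier sketch, and the pattern descriptions are illustrative rather than prescribed by the framework.

    # Hypothetical sketch: choosing an interaction pattern from a task's HAS
    # level. Assumes the HASLevel enum defined in the earlier sketch.
    def interaction_pattern(level: HASLevel) -> str:
        if level == HASLevel.H1_FULL_AUTOMATION:
            return "run autonomously; report results when done"
        if level == HASLevel.H2_MINIMAL_HUMAN_INPUT:
            return "run autonomously; pause at key checkpoints for human approval"
        if level == HASLevel.H3_EQUAL_PARTNERSHIP:
            return "shared workspace; human and agent iterate together"
        if level == HASLevel.H4_HUMAN_LED:
            return "wait for human direction; offer drafts and suggestions"
        return "stay in the background; assist only when explicitly asked"

    # e.g. interaction_pattern(HASLevel.H2_MINIMAL_HUMAN_INPUT)
    # -> "run autonomously; pause at key checkpoints for human approval"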

Key Finding

H3 (equal partnership) is the dominant worker-desired level in 45.2% of occupations. This suggests workers broadly favor partnership over replacement, even for tasks that could be fully automated.
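
One plausible way a statistic like this could be derived: collect workers' desired HAS level for each task, take the most common level within each occupation, and report the share of occupations where that level is H3. The sketch below assumes that aggregation and uses made-up placeholder ratings; the study's actual method may differ.

    # Hypothetical sketch: dominant worker-desired level per occupation as the
    # most common per-task rating. Ratings are placeholders, not real data.
    from collections import Counter

    task_ratings = {  # occupation -> desired HAS level for each of its tasks
        "occupation A": [3, 3, 2, 3, 4],
        "occupation B": [1, 2, 1, 1],
    }

    dominant = {occ: Counter(levels).most_common(1)[0][0]
                for occ, levels in task_ratings.items()}
    share_h3 = sum(1 for lvl in dominant.values() if lvl == 3) / len(dominant)
    # share_h3: fraction of occupations whose dominant desired level is H3.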

When to Use

  • Designing AI agent interfaces and interaction patterns
  • Setting expectations for AI adoption
  • Identifying which tasks warrant autonomous vs. collaborative AI approaches
  • Workforce planning and skill development

Limitations

  • Self-reported preferences may not match revealed preferences
  • Level assignment requires judgment calls
  • Task boundaries aren’t always clean
  • Worker exposure to AI capabilities varies widely

Related: 07-molecule—desire-capability-landscape, 07-atom—automation-vs-augmentation, 07-atom—worker-centered-ai-development-question