The Last Vote: A Multi-Stakeholder Framework for Language Model Governance

Sahoo & Chhawacharia (2025)

Core Argument

Current AI governance suffers from “technocratic reductionism”: treating governance as an optimization problem rather than a question of democratic authorization. The authors propose treating democratic integrity as a primary optimization objective rather than a side constraint.

Key Contributions

  1. Seven-category democratic risk taxonomy extending beyond individual-level harms to capture systemic threats
  2. Incident Severity Score (ISS) - a stakeholder-adaptive metric that aggregates heterogeneous stakeholder utilities into governance signals
  3. Four-phase, six-year implementation roadmap transitioning from voluntary coordination to binding democratic oversight
  4. Operationalized deliberative democracy through institutionalized co-governance, citizen panels, and sovereignty zones
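The paper does not reproduce the ISS formula in this summary, but the idea of a stakeholder-adaptive aggregation can be sketched. The function below is an illustrative weighted average; the stakeholder names, weights, and utility values are hypothetical, not the authors' actual metric.

```python
# Illustrative sketch of a stakeholder-adaptive Incident Severity Score (ISS).
# The weights, stakeholder groups, and aggregation rule here are hypothetical
# assumptions for illustration, not the paper's formula.

def incident_severity_score(utilities: dict[str, float],
                            weights: dict[str, float]) -> float:
    """Aggregate per-stakeholder harm utilities (0-1) into a single ISS (0-1).

    utilities: harm as assessed by each stakeholder group (0 = none, 1 = severe).
    weights:   relative influence of each group; normalized internally, so the
               caller does not need weights that sum to 1.
    """
    total_weight = sum(weights[s] for s in utilities)
    return sum(weights[s] * utilities[s] for s in utilities) / total_weight

# Hypothetical example: an electoral-misinformation incident scored by three groups.
utilities = {"citizens": 0.8, "regulators": 0.6, "platform": 0.3}
weights = {"citizens": 0.5, "regulators": 0.3, "platform": 0.2}
iss = incident_severity_score(utilities, weights)
```

Because the weights are normalized inside the function, the same code supports reweighting stakeholders per incident type, which is one way a metric could be "stakeholder-adaptive" in the paper's sense.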

The Reframe

Existing frameworks (EU AI Act, US executive orders) focus on individual harms and discrete technical risk vectors. This misses how scaled generative systems destabilize democratic legitimacy through structural modalities: cascading effects, path dependencies, and procedural erosion.

The paper treats AI as socio-technical infrastructure that redistributes epistemic authority and encodes normative commitments. Governance must address this constitutive politicality.

Seven Risk Categories

  1. Discrimination & Democratic Exclusion
  2. Privacy Erosion & Democratic Surveillance
  3. Electoral Misinformation & Discourse Degradation
  4. Democratic Manipulation & Malicious Interference
  5. Civic Participation & Human Agency Loss
  6. Democratic Power Concentration
  7. Systemic Democratic Fragility
