# Governance as Optimization vs. Authorization
## The Two Framings
**Governance as optimization:** AI governance is a technical problem. Define risk metrics, set thresholds, enforce compliance. The goal is to constrain AI systems within acceptable bounds.

**Governance as authorization:** AI governance is a political problem. Determine who decides, through what processes, and with what legitimacy. The goal is to ensure AI systems serve democratically authorized purposes.
## Key Differences
| Dimension | Optimization Frame | Authorization Frame |
|---|---|---|
| Primary question | Does this system meet standards? | Who authorized this system to decide? |
| Success metric | Compliance rate | Democratic legitimacy |
| Expertise role | Technical experts set and enforce rules | Multiple knowledge types contribute |
| Stakeholder involvement | Consultation | Co-governance |
| Democratic values | Constraints on optimization | The objective itself |
## The Information Architecture Connection
Format shapes cognition. How a governance framework structures the problem shapes which solutions become thinkable.
Optimization framing produces:
- Risk taxonomies focused on measurable harms
- Compliance checklists
- Technical audit requirements
- Expert-driven assessment
Authorization framing produces:
- Participation mechanisms
- Legitimacy assessments
- Stakeholder deliberation processes
- Coalition-building requirements
Both framings can be rigorous, and both can fail — but they fail in different ways and miss different things.
## When Each Applies
Optimization works when:
- Harms are individual and measurable
- Technical expertise captures relevant risks
- Democratic authorization can be assumed (established public mandate)
- Speed matters more than participation
Authorization is required when:
- Harms are systemic and structural
- Technical metrics miss experiential knowledge
- Legitimacy is contested
- AI systems redistribute power in ways that affect who decides
**Related:** 02-atom—format-shapes-cognition, 05-atom—ai-redistributes-epistemic-authority, 05-molecule—democratic-risk-taxonomy