Democratic Risk Taxonomy for AI Systems
Overview
A seven-category framework for assessing AI risks to democratic institutions. Extends beyond individual-level harms (fairness, privacy) to capture systemic threats to democratic processes and institutional stability.
The Seven Categories
1. Discrimination & Democratic Exclusion
Systematic exclusion of communities from democratic participation, not just individual unfair treatment. Algorithmic discrimination in voting access, civic service delivery, and representation in democratic processes.
2. Privacy Erosion & Democratic Surveillance
AI-assisted surveillance enabling unprecedented monitoring of political associations and communications. Chilling effects on free expression and opposition organizing extend privacy concerns into democratic participation rights.
3. Electoral Misinformation & Discourse Degradation
Computational propaganda, hyper-personalized misinformation, systematic degradation of civic discourse quality. Distinct from general misinformation because it specifically targets electoral processes and democratic deliberation.
4. Democratic Manipulation & Malicious Interference
Coordinated attacks on democratic institutions: large-scale electoral interference, voter suppression, systematic manipulation of democratic processes. Beyond individual fraud to institutional threats.
5. Civic Participation & Human Agency Loss
Algorithmic curation creating echo chambers and filter bubbles; delegation of civic decision-making to automated systems. Reduces meaningful human participation in democracy through the design of the information environment.
6. Democratic Power Concentration
Capital and data requirements concentrating power among a few actors. Democratic institutions becoming dependent on private entities for critical functions. Infrastructure dependencies delegating consequential decisions to unaccountable entities.
7. Systemic Democratic Fragility
Emergent behaviors from complex AI system interactions threatening democratic stability: cascade failures, unintended coordination effects, and system goals drifting out of alignment with democratic oversight. Novel risks to institutional stability.
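For readers who want to operationalize the taxonomy, a minimal Python sketch follows. It assumes a hypothetical 0-3 severity scale, and the names (`DemocraticRisk`, `Finding`, `Assessment`) are illustrative, not part of the framework itself. What it demonstrates: keying an assessment record to the seven categories makes blind spots visible, because an unscored category surfaces as an open question rather than an implicit "no risk".

```python
from dataclasses import dataclass, field
from enum import Enum


class DemocraticRisk(Enum):
    """The seven categories, encoded for machine-readable assessments."""
    DISCRIMINATION_EXCLUSION = 1   # Discrimination & Democratic Exclusion
    PRIVACY_SURVEILLANCE = 2       # Privacy Erosion & Democratic Surveillance
    ELECTORAL_MISINFORMATION = 3   # Electoral Misinformation & Discourse Degradation
    MANIPULATION_INTERFERENCE = 4  # Democratic Manipulation & Malicious Interference
    PARTICIPATION_AGENCY_LOSS = 5  # Civic Participation & Human Agency Loss
    POWER_CONCENTRATION = 6        # Democratic Power Concentration
    SYSTEMIC_FRAGILITY = 7         # Systemic Democratic Fragility


@dataclass
class Finding:
    """One assessed risk: category, severity, and supporting evidence."""
    category: DemocraticRisk
    severity: int  # hypothetical ordinal scale: 0 = none .. 3 = critical
    rationale: str


@dataclass
class Assessment:
    """A per-system assessment that tracks coverage of all seven categories."""
    system_name: str
    findings: list[Finding] = field(default_factory=list)

    def uncovered(self) -> set[DemocraticRisk]:
        """Categories with no finding yet -- potential blind spots."""
        return set(DemocraticRisk) - {f.category for f in self.findings}

    def flagged(self, threshold: int = 2) -> list[Finding]:
        """Findings at or above the severity threshold."""
        return [f for f in self.findings if f.severity >= threshold]


if __name__ == "__main__":
    a = Assessment("benefits-eligibility-model")  # hypothetical system
    a.findings.append(Finding(
        DemocraticRisk.DISCRIMINATION_EXCLUSION, 2,
        "Eligibility errors cluster in specific communities.",
    ))
    a.findings.append(Finding(
        DemocraticRisk.POWER_CONCENTRATION, 1,
        "A single private vendor operates the scoring pipeline.",
    ))
    print("Blind spots:", sorted(r.name for r in a.uncovered()))
    print("Flagged:", [f.category.name for f in a.flagged()])
```

The `uncovered()` check mirrors the framework's emphasis on structural coverage: a category with no finding is treated as unexamined, not as low-risk.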
When to Use This Framework
- Evaluating AI systems that touch democratic processes
- Risk assessment for public-sector AI deployments
- Identifying blind spots in compliance-oriented governance approaches
- Moving beyond individual fairness metrics to structural analysis
Limitations
- Derived from Western democratic theory; may require adaptation for other contexts
- Categories can overlap and interact
- Quantification of these risks remains methodologically challenging
- Doesn’t address the pace mismatch between AI development and democratic deliberation
Related: 05-atom—democratic-integrity-as-objective, 05-atom—systemic-vs-individual-ai-harms, 05-molecule—stakeholder-adaptive-scoring