Stakeholder-Adaptive Risk Scoring
The Principle
Risk assessment should aggregate heterogeneous stakeholder perspectives rather than privileging technical expertise alone. Different groups possess non-substitutable forms of knowledge.
Why This Matters
Traditional risk scoring treats assessment as a technical exercise: experts evaluate systems against objective criteria. This approach produces legitimacy deficits because it excludes knowledge that can’t be reduced to technical metrics.
Seven categories of expertise should inform AI risk assessment:
- Technical practitioners: Feasibility assessments, capability boundaries
- Academic researchers: Interdisciplinary safety analysis, long-term perspectives
- Democratic representatives: Electoral legitimacy, constitutional compatibility
- Civil society organizations: Public-interest advocacy, value-sensitive oversight
- Industry participants: Market dynamics, implementation costs
- Affected communities: Experiential evidence of algorithmic harms
- International partners: Transnational coordination, cross-jurisdictional effects
Each contributes knowledge the others can’t provide.
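To make the categories usable downstream (for example, in the aggregation step under "How to Apply"), they can be registered as explicit data rather than left implicit in an assessor's head. The sketch below is illustrative Python; the class name, field names, and identifiers are assumptions made for the example, not part of the principle.

```python
from dataclasses import dataclass

# Hypothetical registry of the seven expertise categories, so that later
# aggregation and participation rules can reference them explicitly.
@dataclass(frozen=True)
class StakeholderCategory:
    name: str
    knowledge_contribution: str

STAKEHOLDER_CATEGORIES = [
    StakeholderCategory("technical_practitioners", "feasibility, capability boundaries"),
    StakeholderCategory("academic_researchers", "interdisciplinary safety analysis, long-term perspectives"),
    StakeholderCategory("democratic_representatives", "electoral legitimacy, constitutional compatibility"),
    StakeholderCategory("civil_society", "public-interest advocacy, value-sensitive oversight"),
    StakeholderCategory("industry", "market dynamics, implementation costs"),
    StakeholderCategory("affected_communities", "experiential evidence of algorithmic harms"),
    StakeholderCategory("international_partners", "transnational coordination, cross-jurisdictional effects"),
]
```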
How to Apply
Graduated participation based on risk level:
- Low-risk: Enhanced consultation and transparency (public comment, hearings)
- Medium-risk: Structured deliberation (citizen panels, stakeholder workshops)
- High-risk: Binding co-governance (formal decision-making authority, veto rights, appeals)
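A minimal sketch of how the graduated tiers might be encoded, assuming a simple tier-to-mechanism lookup; the tier names and mechanism labels are placeholders chosen for illustration, not a prescribed vocabulary.

```python
from enum import Enum

class RiskTier(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"

# Illustrative mapping from risk tier to the participation mechanisms
# a deployment at that tier must offer.
PARTICIPATION_BY_TIER = {
    RiskTier.LOW: ["public_comment", "hearings", "transparency_reporting"],
    RiskTier.MEDIUM: ["citizen_panels", "stakeholder_workshops"],
    RiskTier.HIGH: ["binding_co_governance", "veto_rights", "appeals_process"],
}

def required_participation(tier: RiskTier) -> list[str]:
    """Return the participation mechanisms required at a given risk tier."""
    return PARTICIPATION_BY_TIER[tier]
```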
Weight aggregation: Stakeholder inputs are aggregated through explicit, contestable mechanisms rather than being hidden in technical assumptions. The goal is to make political choices visible, not to achieve false neutrality.
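One way to keep the aggregation explicit and contestable is to take the weight vector as a published input rather than a hidden constant. The sketch below is a hypothetical weighted mean on a 0-1 risk scale; the function name, the scale, and the equal weights in the usage example are assumptions for illustration only.

```python
def aggregate_risk(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted mean of per-stakeholder risk scores on a 0-1 scale.

    The weights are explicit inputs, so they can be published, contested,
    and revised rather than buried in technical assumptions.
    """
    total_weight = sum(weights[group] for group in scores)
    if total_weight == 0:
        raise ValueError("At least one stakeholder group must carry weight.")
    return sum(scores[group] * weights[group] for group in scores) / total_weight

# Example: equal weights, two groups disagreeing about risk.
scores = {"technical_practitioners": 0.4, "affected_communities": 0.8}
weights = {"technical_practitioners": 1.0, "affected_communities": 1.0}
print(aggregate_risk(scores, weights))  # 0.6
```

Changing the weights changes the score, which is exactly the point: the political choice is visible in the inputs instead of being baked into the method.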
When This Especially Matters
- Any AI deployment affecting democratic processes
- Public-sector adoption of AI systems
- High-stakes applications where technical expertise alone yields an incomplete risk picture
- Contexts where affected communities have experiential knowledge that formal audits miss
Limitations
Creating single scores from heterogeneous inputs is inherently political: the mechanism encodes choices about whose knowledge counts. This is intentional; the alternative is hiding those choices in technical assumptions. But it does require transparency about the aggregation method.
Related: 05-atom—expertocracy-problem, 05-molecule—democratic-risk-taxonomy, 05-atom—democratic-integrity-as-objective