How Do You Balance Expertise and Democratic Participation in AI Oversight?
AI systems require specialized expertise for effective evaluation. Democratic legitimacy requires broad participation. These pull in different directions.
Too much expertise: governance capture by technical elites, legitimacy deficits, and missed experiential knowledge from affected communities.
Too much participation: governance paralysis, technical assessments overridden by uninformed opinion, and deliberation processes exploited by bad actors.
The interesting question isn’t “which should win” but “what institutional designs make both tractable?” Graduated participation based on risk level? Structured deliberation that builds shared technical understanding? Different stakeholder roles (advisory vs. binding authority) for different knowledge types?
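To make "graduated participation" concrete, here is a minimal sketch in Python of a decision table mapping risk tiers to stakeholder roles. All tier names, stakeholder groups, and role assignments are illustrative assumptions, not a proposal drawn from any existing framework:

```python
from dataclasses import dataclass
from enum import Enum


class Authority(Enum):
    ADVISORY = "advisory"  # input is consulted but not controlling
    BINDING = "binding"    # input can block or mandate changes


@dataclass(frozen=True)
class ParticipationRule:
    stakeholder: str       # who participates (hypothetical group names)
    authority: Authority   # what weight their input carries


# Hypothetical decision table: higher-risk systems widen the circle of
# participants and upgrade affected communities from advisory to binding
# roles, while technical review stays binding at every tier.
PARTICIPATION_BY_RISK = {
    "low": [
        ParticipationRule("technical_experts", Authority.BINDING),
        ParticipationRule("affected_communities", Authority.ADVISORY),
    ],
    "medium": [
        ParticipationRule("technical_experts", Authority.BINDING),
        ParticipationRule("affected_communities", Authority.ADVISORY),
        ParticipationRule("independent_auditors", Authority.BINDING),
    ],
    "high": [
        ParticipationRule("technical_experts", Authority.BINDING),
        ParticipationRule("affected_communities", Authority.BINDING),
        ParticipationRule("independent_auditors", Authority.BINDING),
        ParticipationRule("public_deliberation_panel", Authority.ADVISORY),
    ],
}


def participants_for(risk_level: str) -> list[ParticipationRule]:
    """Return the participation rules for a given risk tier."""
    return PARTICIPATION_BY_RISK[risk_level]
```

The point of the table is that "advisory vs. binding" is a per-tier design choice, not a fixed property of a stakeholder group; any real version would need contested thresholds for what counts as low, medium, or high risk.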
This is an open problem without clean solutions.
Related: 05-atom—expertocracy-problem, 05-molecule—stakeholder-adaptive-scoring, 05-atom—democratic-integrity-as-objective