# AI Governance Analogues Comparison
## Overview
Four historical technologies offer lessons for AI governance: nuclear technology, the Internet, encryption products, and genetic engineering. Each has different characteristics that determine which lessons transfer.
## The Four Analogues
### Nuclear Technology
Characteristics: High barriers to entry, dual-use, controllable physical assets (fissile materials, delivery systems), consensus on catastrophic risk.
Governance model: International bodies (IAEA), treaties, export controls on materials and delivery systems.
When it applies to AI: If AI requires substantial resources, poses agreed-upon catastrophic risks, and involves physical assets that can be monitored. Frontier models with massive compute requirements might fit.
Key limitation: Nuclear governance focused on physical assets because attempts to govern knowledge alone failed. AI’s end products (model weights, code) are nonphysical; only inputs such as compute hardware offer comparable physical control points.
### The Internet
Characteristics: Government as funder-facilitator, private sector leads development, minimal safety concerns at origin, culture of open collaboration.
Governance model: Private sector-led standards bodies (IETF, W3C), consensus-based through RFCs, government deliberately delegated control.
When it applies to AI: If AI development poses minimal risks to safety or security. Government funds and facilitates; private sector innovates and self-governs.
Key limitation: The Internet’s norms emerged before safety concerns materialized, whereas AI governance discussions begin with safety concerns already on the table. Also: open-collaboration norms may not constrain competitors who don’t share them.
### Encryption Products
Characteristics: Nonphysical end products, dual-use, low barriers to entry, government initially held expertise.
Governance model: Export controls, classification as munitions, ultimately dismantled after consensus eroded.
What it teaches: Cautionary tale. Governance without stakeholder consensus fails. Controls on nonphysical assets don’t stop motivated actors. Governance that pits national security against economic security harms both.
Key lesson: Don’t create policy dilemmas where security and economic interests conflict. Economic strength is national security.
### Genetic Engineering
Characteristics: Safety risks recognized by practitioners themselves, physical assets (cell cultures, embryos) can be controlled, scientific community has shared norms.
Governance model: Voluntary moratoria, coordination by trusted bodies (National Academies, WHO), guidelines reflect community consensus.
When it applies to AI: If risks are recognized, a scientific community with shared norms exists, and trusted bodies can coordinate consensus-building. Works when scope is narrow and focused on technical/safety issues rather than ethics.
Key limitation: Consensus on genetic engineering fractured when ethical questions joined safety questions. International governance for reproductive cloning failed for this reason.
## Matching Conditions
| Conditions of the AI application | Best Analogue |
|---|---|
| High barriers, catastrophic risk, physical assets | Nuclear |
| Minimal safety risk, private sector leads | Internet |
| Low barriers, nonphysical assets, recognized safety risk, shared community norms | Genetic Engineering |
| Nonphysical assets, contested norms | Encryption (avoid; cautionary tale) |
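To make the mapping mechanical, here is a minimal sketch of the table as a decision procedure in Python. Everything in it (the `Analogue` enum, the `AIProfile` fields, and `best_analogue`) is hypothetical naming invented for illustration; the branches simply restate the rows above.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Analogue(Enum):
    NUCLEAR = auto()
    INTERNET = auto()
    GENETIC_ENGINEERING = auto()
    ENCRYPTION = auto()  # cautionary tale, not a model to copy


@dataclass
class AIProfile:
    """Hypothetical, coarse traits of one AI application (illustrative only)."""
    high_barriers: bool    # development requires massive resources (e.g., compute)
    physical_assets: bool  # controllable physical control points exist
    safety_risk: bool      # recognized safety or catastrophic risk
    shared_norms: bool     # developer community shares governance norms


def best_analogue(ai: AIProfile) -> Analogue:
    """Restate the matching-conditions table as a decision procedure."""
    if ai.high_barriers and ai.safety_risk and ai.physical_assets:
        return Analogue.NUCLEAR  # treaties plus monitoring of physical assets
    if not ai.safety_risk:
        return Analogue.INTERNET  # private-sector-led, consensus standards
    if not ai.physical_assets and ai.shared_norms:
        return Analogue.GENETIC_ENGINEERING  # voluntary, community-led norms
    return Analogue.ENCRYPTION  # contested norms, nonphysical assets: expect failure


# Example: a frontier model with massive compute needs maps to the nuclear analogue.
frontier = AIProfile(high_barriers=True, physical_assets=True,
                     safety_risk=True, shared_norms=False)
assert best_analogue(frontier) is Analogue.NUCLEAR
```

The branch order carries the table’s logic: nuclear-style control is checked first because it is only viable when all three of its conditions hold, and encryption is the fall-through precisely because it marks the combination to avoid.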
## The Meta-Lesson
The question “What is AI like?” determines which governance model fits. AI isn’t uniformly like any of these; different AI applications may map to different analogues.
Frontier models requiring massive compute might map to the nuclear analogue. Narrow applications with minimal safety concerns might map to the Internet. Open-source models with potential misuse might map to genetic engineering.
The risk is treating AI as monolithic and picking the wrong analogue.
Related: 07-atom—collingridge-dilemma, 05-atom—physical-vs-nonphysical-governance, 05-atom—consensus-erosion-pattern