Comparative Global AI Regulation: Policy Perspectives from the EU, China, and the US

Citation: Chun, J., Schroeder de Witt, C., & Elkins, K. (2024). Comparative Global AI Regulation: Policy Perspectives from the EU, China, and the US. arXiv:2410.21279.

Core Framing

The paper uses AI regulatory comparison as a lens to surface deeper cultural and political differences between major powers. Rather than treating regulation as purely technical policy, the authors frame regulatory choices as reflecting fundamental worldview differences about the role of government, market dynamics, and risk tolerance.

Three analytical tensions structure the comparison:

  • Safety vs. innovation priorities
  • Cooperation vs. competition orientations
  • Trust in centralized authority vs. decentralized free markets

Key Arguments

EU approach: Top-down, risk-based, prevention-oriented. The EU AI Act creates a comprehensive classification system (prohibited, high-risk, limited-risk, minimal-risk) with centralized enforcement through the new AI Office. Regulatory philosophy is “prohibit unless explicitly permitted.”

China approach: Hybrid model combining centralized guidance with decentralized innovation. Appears top-down but in practice emphasizes regional competition and “Little Giants” (innovative SMEs) operating with lighter enforcement. Use-case-specific regulations (deepfakes, recommendation algorithms, generative AI) rather than a comprehensive framework.

US approach: Decentralized, market-driven, permissive. Executive Order 14110 coordinates existing agencies rather than creating a new centralized authority. Regulatory philosophy is “permit unless explicitly prohibited.” States such as California are experimenting with stronger regulation (SB 1047) that targets frontier models via computational thresholds.

Notable Concepts

  • Brussels Effect: EU regulations influencing global standards through market power, similar to GDPR’s global influence
  • GPAI (General Purpose AI): EU’s regulatory category for foundation models with broad capabilities
  • Computational threshold regulation: SB 1047’s approach of targeting models by training compute (over 10^26 FLOPs) and cost ($100M+) rather than application domain (see the sketch after this list)
  • Regulatory sandboxes: EU provision for controlled testing environments to encourage innovation within regulatory framework
  • “Little Giants” approach: China’s informal lighter-touch enforcement for innovative SMEs
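
The contrast between the EU’s use-case-based tiers and SB 1047’s scale-based trigger can be made concrete with a short sketch. This is an illustrative toy model, not compliance logic: the tier names follow the EU AI Act’s four levels and the compute/cost figures follow SB 1047 as summarized above, but the `ModelProfile` structure, the use-case-to-tier mapping, and the `classify_eu_tier` / `is_sb1047_covered` helpers are hypothetical.

```python
"""Illustrative sketch only: maps the regulatory triggers discussed above onto
simple checks. Tier names follow the EU AI Act's four-level scheme; the
compute/cost figures follow SB 1047 as summarized in this note. Function
names, the use-case mapping, and the example model profile are hypothetical."""

from dataclasses import dataclass


@dataclass
class ModelProfile:
    """Hypothetical description of a deployed AI system."""
    use_case: str              # e.g. "hiring", "chatbot", "spam_filter"
    training_flops: float      # total training compute
    training_cost_usd: float   # total training cost


# Assumed, simplified mapping of use cases to EU AI Act risk tiers.
EU_TIER_BY_USE_CASE = {
    "social_scoring": "prohibited",
    "hiring": "high-risk",
    "chatbot": "limited-risk",
    "spam_filter": "minimal-risk",
}


def classify_eu_tier(model: ModelProfile) -> str:
    """Return the (assumed) EU AI Act tier for the model's use case."""
    return EU_TIER_BY_USE_CASE.get(model.use_case, "minimal-risk")


def is_sb1047_covered(model: ModelProfile) -> bool:
    """SB 1047-style trigger: large training compute AND training cost,
    independent of application domain."""
    return model.training_flops > 1e26 and model.training_cost_usd > 100_000_000


if __name__ == "__main__":
    frontier_chatbot = ModelProfile("chatbot", training_flops=2e26,
                                    training_cost_usd=150_000_000)
    print(classify_eu_tier(frontier_chatbot))   # "limited-risk": judged by application
    print(is_sb1047_covered(frontier_chatbot))  # True: judged by scale of training run
```

The same frontier chatbot lands in a low EU tier because its application is low-stakes, yet still trips the SB 1047 coverage test, which illustrates the application-based versus capability-based distinction drawn above.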

Strategic Value

The paper’s framing offers a useful mental model for understanding why regulatory approaches differ, not just how they differ. This is valuable for:

  • Anticipating how regulations might evolve in each jurisdiction
  • Understanding the political economy of AI governance debates
  • Designing products that navigate multiple regulatory environments

Extracted Content