Computational Threshold Regulation
California’s SB 1047 proposed a novel regulatory approach: target AI systems by the resources used to train them rather than by their application domain.
The bill set thresholds of 10^26 floating-point operations and over $100 million in training cost for initial training, with a lower bar of roughly $10 million and 3×10^25 operations for fine-tuning a covered model. Models exceeding these thresholds would face testing, registration, and audit requirements regardless of what they're used for.
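The threshold logic can be sketched as a simple predicate. This is a hypothetical illustration, not the bill's statutory language; the exact dollar and FLOP figures (from the amended bill text) should be treated as assumptions.

```python
# Sketch of SB 1047's compute/cost coverage test.
# Threshold values are assumptions drawn from the amended bill text,
# not a legal definition.

TRAIN_FLOP_THRESHOLD = 1e26       # covered model: >10^26 operations
TRAIN_COST_THRESHOLD = 100e6      # and >$100M in training cost
FINETUNE_FLOP_THRESHOLD = 3e25    # covered fine-tune: >=3x10^25 operations
FINETUNE_COST_THRESHOLD = 10e6    # and >$10M in fine-tuning cost


def is_covered_model(train_flop: float, train_cost_usd: float) -> bool:
    """A model is covered only if it crosses both the compute and cost bars."""
    return (train_flop > TRAIN_FLOP_THRESHOLD
            and train_cost_usd > TRAIN_COST_THRESHOLD)


def is_covered_finetune(ft_flop: float, ft_cost_usd: float) -> bool:
    """A fine-tune of a covered model is itself covered above these bars."""
    return (ft_flop >= FINETUNE_FLOP_THRESHOLD
            and ft_cost_usd > FINETUNE_COST_THRESHOLD)
```

Note the conjunctive test: a cheap run at huge compute, or an expensive run at modest compute, would fall outside coverage — one of the proxy-accuracy concerns critics raised.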
This reflects a regulatory philosophy that the technical potential of AI systems, measured by training compute, predicts risk better than application context. A powerful general-purpose model poses potential dangers across many use cases, so regulate at the capability level rather than the deployment level.
The approach sparked debate. Supporters argued it focuses on the most powerful systems that could cause widespread harm. Critics worried it would favor established tech giants (who can absorb compliance costs) over startups, stifle innovation, and target the wrong proxy for actual risk.
Related: 05-atom—prohibition-vs-permission-regulatory-models