Systemic Risk (AI Definition)
Under the EU AI Act, systemic risk means: a risk specific to the high-impact capabilities of general-purpose AI models, having a significant impact on the EU market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or society as a whole, that can be propagated at scale across the value chain.
The definition emphasizes propagation at scale. Systemic risk isn't just individual harm multiplied; it's harm that amplifies through interconnected systems.
A model is presumed to pose systemic risk if trained with compute exceeding 10²⁵ floating-point operations (FLOPs). This threshold is a proxy for capability, chosen for operational clarity rather than scientific precision.
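A rough sense of where the threshold bites can be sketched with the widely used C ≈ 6·N·D approximation (≈6 FLOPs per parameter per training token). The model sizes and token counts below are illustrative assumptions, not figures from the Act or from any real model:

```python
# Back-of-envelope training-compute estimate using the common
# C ≈ 6 * N * D heuristic: ~6 FLOPs per parameter per token.
# All model figures here are hypothetical, for illustration only.

EU_AI_ACT_THRESHOLD = 1e25  # FLOPs; triggers the presumption of systemic risk

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute in FLOPs."""
    return 6 * params * tokens

def presumed_systemic_risk(params: float, tokens: float) -> bool:
    """True if estimated compute meets or exceeds the Act's threshold."""
    return training_flops(params, tokens) >= EU_AI_ACT_THRESHOLD

# Hypothetical 70B-parameter model, 15T training tokens: ~6.3e24 FLOPs,
# just under the threshold.
small = presumed_systemic_risk(70e9, 15e12)

# Hypothetical 200B-parameter model, 20T tokens: ~2.4e25 FLOPs, over it.
large = presumed_systemic_risk(200e9, 20e12)
```

The heuristic illustrates why the threshold is a blunt proxy: two models with very different capabilities can sit on opposite sides of it purely because of training-run size.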
The Commission can also designate a model as posing systemic risk based on capability assessments, independent of the compute metric.
Related: 05-molecule—general-purpose-ai-governance, 05-molecule—risk-based-ai-classification