AI Risks Differ from Traditional Software Risks

AI systems present risks that existing software risk frameworks don’t adequately address:

Data dependency: Training data may not represent the intended operational context. Ground truth may not exist or may be unavailable. Data can grow stale relative to the deployment context.
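
A minimal sketch of how this can surface in monitoring, assuming SciPy and two hypothetical feature samples: a two-sample Kolmogorov-Smirnov test flags when live data no longer looks like the training data.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
# Hypothetical data: the distribution the model trained on vs. what it now serves.
train_ages = rng.normal(loc=35, scale=8, size=5_000)
live_ages = rng.normal(loc=48, scale=12, size=5_000)

# Two-sample KS test: a low p-value suggests the training data
# no longer represents the deployment context.
stat, p_value = ks_2samp(train_ages, live_ages)
if p_value < 0.01:
    print(f"Distribution shift detected (KS={stat:.3f}, p={p_value:.2e})")
```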

Training mutability: Intentional or unintentional changes during training can fundamentally alter system performance in ways traditional software doesn’t experience.
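
A hedged illustration using scikit-learn (the dataset and architecture are arbitrary): training the same model on the same data with two different random seeds can yield models that disagree on individual predictions, a variability deterministic traditional code doesn't have.

```python
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Identical data, identical architecture; only the weight initialization differs.
model_a = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300, random_state=1).fit(X, y)
model_b = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300, random_state=2).fit(X, y)

disagreement = (model_a.predict(X) != model_b.predict(X)).mean()
print(f"Predictions differ on {disagreement:.1%} of inputs")
```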

Scale and complexity: Systems can contain billions or even trillions of decision points, embedded within more traditional software applications.
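
To make the scale concrete, a small PyTorch sketch that counts the learned parameters (the "decision points") of even a toy network; production models multiply this count by many orders of magnitude.

```python
import torch.nn as nn

# A deliberately tiny model; deployed models scale this count into the
# billions while still sitting inside ordinary application code.
model = nn.Sequential(
    nn.Linear(784, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params:,} learned parameters")  # ~200k even for this toy
```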

Pre-trained model risks: Transfer learning advances research and practice, but pre-trained models increase statistical uncertainty and introduce challenges in bias management and reproducibility.
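
The reproducibility facet is the one most amenable to code. A non-exhaustive sketch, assuming PyTorch, of pinning the seedable randomness sources before fine-tuning; GPU kernels and data loaders can still introduce nondeterminism:

```python
import os
import random

import numpy as np
import torch

def seed_everything(seed: int = 42) -> None:
    """Pin the seedable randomness sources before fine-tuning a pre-trained model."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    os.environ["PYTHONHASHSEED"] = str(seed)
    # Raise rather than silently run a nondeterministic kernel.
    torch.use_deterministic_algorithms(True)
```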

Emergent property unpredictability: Large-scale pre-trained models exhibit emergent behaviors whose failure modes are difficult to predict or anticipate.

Enhanced inference capability: AI can identify individuals or infer previously private information by aggregating disparate data, creating novel privacy risks.
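
A toy illustration of the aggregation risk, with fabricated records: joining an "anonymized" dataset to a public one on shared quasi-identifiers (a Sweeney-style linkage attack) re-identifies every individual, even though neither dataset leaks anything on its own.

```python
import pandas as pd

# "Anonymized" health records: names removed, quasi-identifiers kept.
health = pd.DataFrame({
    "zip": ["02139", "02139", "10001"],
    "birth_year": [1985, 1990, 1985],
    "sex": ["F", "M", "F"],
    "diagnosis": ["asthma", "diabetes", "hypertension"],
})

# Public voter roll with the same quasi-identifiers plus names.
voters = pd.DataFrame({
    "name": ["A. Lee", "B. Kim", "C. Ortiz"],
    "zip": ["02139", "02139", "10001"],
    "birth_year": [1985, 1990, 1985],
    "sex": ["F", "M", "F"],
})

# The join re-identifies every record.
reidentified = health.merge(voters, on=["zip", "birth_year", "sex"])
print(reidentified[["name", "diagnosis"]])
```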

Maintenance triggers: More frequent maintenance is needed due to data drift, model drift, and concept drift.
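
A sketch of one common trigger: the Population Stability Index (PSI) over a single feature, with the conventional rule-of-thumb threshold of 0.2 standing in for a real retraining policy.

```python
import numpy as np

def psi(expected: np.ndarray, observed: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a baseline sample and a live sample."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    edges[0], edges[-1] = -np.inf, np.inf  # capture out-of-range live values
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    o_pct = np.histogram(observed, bins=edges)[0] / len(observed)
    # Avoid log(0) / division by zero in empty bins.
    e_pct = np.clip(e_pct, 1e-6, None)
    o_pct = np.clip(o_pct, 1e-6, None)
    return float(np.sum((o_pct - e_pct) * np.log(o_pct / e_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0, 1, 10_000)
drifted = rng.normal(0.5, 1.2, 10_000)
if psi(baseline, drifted) > 0.2:   # rule-of-thumb retraining trigger
    print("Drift exceeds threshold: schedule retraining")
```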

Testing gaps: Software testing standards are underdeveloped for AI, and traditional code-testing approaches don't transfer cleanly.
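
One adaptation practitioners use is replacing exact-output assertions with metamorphic or statistical tests. A sketch, where `model` and the invariance property are placeholder assumptions:

```python
import numpy as np

def metamorphic_invariance_test(model, X: np.ndarray, tolerance: float = 0.02) -> bool:
    """Traditional test: assert exact outputs. ML adaptation: assert that a
    semantics-preserving perturbation barely changes predictions."""
    perturbed = X + np.random.default_rng(0).normal(0, 1e-3, X.shape)
    baseline = model.predict(X)
    shifted = model.predict(perturbed)
    flip_rate = float(np.mean(baseline != shifted))
    return flip_rate <= tolerance  # statistical pass/fail, not exact equality
```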

Side effect opacity: Side effects of AI systems can't be predicted or detected beyond statistical measures.

Related: 05-atom—ai-risk-definition, 05-atom—ai-risk-measurement-challenges