GAI Risks Originate Primarily from Human Behavior

Many generative AI (GAI) risks stem not from the model itself but from human factors: abuse, misuse, unsafe repurposing (whether adversarial or not), and issues that emerge from human-AI interaction.

This reframes the problem space. Technical interventions that focus solely on model behavior therefore miss a substantial portion of the risk surface. The design of human-AI interactions, organizational governance, and deployment context often matter more than model architecture.

Risks from model or system factors include confabulation, bias amplification, and capability limitations. But the highest-consequence risks frequently emerge from how humans choose to deploy, interpret, and act on GAI outputs.

Related: 01-molecule—human-ai-configuration