Model Entanglement
Model entanglement occurs when machine learning models affect one another during training and tuning, even when software teams intend for them to remain isolated.
In traditional software, modularity means one component’s behavior doesn’t change based on another component’s implementation; components interact only through defined APIs. In ML systems, models can influence each other’s effectiveness through shared data, feedback loops, or downstream dependencies, regardless of whether their code is kept separate.
This creates two practical problems:
- Team coordination burden: Even if separate teams build each model, they must collaborate closely during training and maintenance. Conway’s Law, the observation that system architecture mirrors organizational structure, breaks down because the models won’t respect organizational boundaries.
- Hidden dependencies: Changes to one model can degrade another model’s performance in ways that aren’t obvious from the code or architecture diagrams.
Entanglement is one reason why ML systems accumulate technical debt faster than traditional software systems: the interdependencies are harder to see and manage.
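A minimal sketch of such a hidden dependency (illustrative names, scikit-learn assumed, not from the original note): a downstream model consumes an upstream model’s score as an input feature, so a seemingly local change to the upstream model, here retraining on rebalanced data, shifts that feature’s distribution and typically degrades the downstream model even though the downstream code and data never changed.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 10_000
X = rng.normal(size=(n, 2))
y_up = (X[:, 0] + rng.normal(scale=0.5, size=n) > 0).astype(int)                 # upstream task
y_down = (X[:, 0] + 0.3 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)  # downstream task

# Upstream model v1: its predicted probability becomes a downstream feature.
upstream_v1 = LogisticRegression().fit(X[:, :1], y_up)
feat_v1 = upstream_v1.predict_proba(X[:, :1])[:, 1]

# Downstream model is trained against the v1 score distribution.
downstream = LogisticRegression().fit(np.column_stack([X[:, 1], feat_v1]), y_down)

# Upstream model v2: a "local" change -- retrain with positives oversampled.
# The score still works for the upstream task, but its calibration shifts.
idx = np.concatenate([np.arange(n), np.repeat(np.where(y_up == 1)[0], 2)])
upstream_v2 = LogisticRegression().fit(X[idx, :1], y_up[idx])
feat_v2 = upstream_v2.predict_proba(X[:, :1])[:, 1]

# Same downstream model, same downstream data -- only the upstream score changed.
for name, feat in [("upstream v1", feat_v1), ("upstream v2", feat_v2)]:
    preds = downstream.predict(np.column_stack([X[:, 1], feat]))
    print(f"{name}: score mean={feat.mean():.2f}, "
          f"downstream accuracy={accuracy_score(y_down, preds):.3f}")
```

Nothing in the code or architecture diagram links the two models; the only coupling is a column of scores, which is exactly why the dependency stays hidden.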
Related: 05-atom—non-monotonic-error-propagation, 05-atom—three-ml-engineering-differences