The CACE Principle
Changing Anything Changes Everything.
In ML systems, no inputs are ever truly independent. Changing the input distribution of one feature affects the importance, weights, and use of all other features. This holds whether the model is retrained in batch or adapts online.
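A toy sketch (not from the source) of this entanglement: with correlated features and any regularization, rescaling just one feature's distribution shifts the fitted weight of the *untouched* feature too, because regularized training is not invariant to per-feature scale. The closed-form two-feature ridge regression below is purely illustrative; the data and penalty `lam` are made up.

```python
def ridge_2feature(x1, x2, y, lam=1.0):
    """Closed-form ridge regression for two features (no intercept):
    w = (X^T X + lam*I)^{-1} X^T y, solved as a 2x2 system by hand."""
    a = sum(v * v for v in x1) + lam
    b = sum(u * v for u, v in zip(x1, x2))
    d = sum(v * v for v in x2) + lam
    p = sum(u * v for u, v in zip(x1, y))
    q = sum(u * v for u, v in zip(x2, y))
    det = a * d - b * b
    return ((d * p - b * q) / det, (a * q - b * p) / det)

x1 = [1.0, 2.0, 3.0, 4.0, 5.0]
x2 = [1.1, 1.9, 3.2, 3.8, 5.0]        # correlated with x1
y = [u + v for u, v in zip(x1, x2)]   # target depends on both features

w_before = ridge_2feature(x1, x2, y)
# Change only x1's distribution (rescale it); x2 and y are untouched.
w_after = ridge_2feature([2.0 * v for v in x1], x2, y)

print("w before:", w_before)
print("w after: ", w_after)  # both weights move, including x2's
```

With this data, x2's weight drops from roughly 1.0 to roughly 0.5 even though x2 itself never changed: changing one input changed the use of the other.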
CACE applies beyond input signals. Hyperparameters, learning settings, sampling methods, convergence thresholds, data selection: essentially every adjustment propagates through the system in ways that are difficult to predict and impossible to isolate.
This makes incremental improvement fundamentally harder than in traditional software, where changes can often be contained behind abstraction boundaries.
Related: 05-molecule—ml-technical-debt-taxonomy