Parametric vs. Non-Parametric Memory
AI systems can store knowledge in two fundamentally different ways, each with distinct tradeoffs.
Parametric memory stores knowledge implicitly in model weights. It’s accessed through forward passes, can’t be directly inspected, and the model can’t distinguish high-confidence from low-confidence knowledge; updating it requires retraining.
Non-parametric memory stores knowledge explicitly in an external corpus accessed via retrieval. It’s inspectable (you can see what was retrieved), updateable (change the corpus, change the knowledge), and provides provenance (outputs can cite sources).
RAG systems deliberately combine both: the generator has parametric knowledge about language and reasoning patterns, while the retriever provides non-parametric access to factual content. This separates “how to think” from “what to know.”
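A minimal sketch of that split, assuming a toy bag-of-words retriever; the names `CORPUS`, `retrieve`, `generate`, and `answer` are illustrative, not from any particular library, and `generate` is a stub standing in for the parametric generator.

```python
from collections import Counter

# Non-parametric memory: an explicit, inspectable corpus keyed by source ID.
# Updating knowledge means editing this dict -- no retraining involved.
CORPUS = {
    "doc-1": "The Eiffel Tower is located in Paris, France.",
    "doc-2": "RAG combines a retriever with a text generator.",
    "doc-3": "Mount Everest is the highest mountain above sea level.",
}

def retrieve(query: str, k: int = 2) -> list[tuple[str, str]]:
    """Return the top-k (source_id, passage) pairs by word overlap."""
    q_words = Counter(query.lower().split())
    def score(text: str) -> int:
        return sum((Counter(text.lower().split()) & q_words).values())
    return sorted(CORPUS.items(), key=lambda kv: score(kv[1]), reverse=True)[:k]

def generate(prompt: str) -> str:
    """Parametric component: a stub standing in for an LLM forward pass."""
    return f"<model output conditioned on a prompt of {len(prompt)} chars>"

def answer(question: str) -> str:
    passages = retrieve(question)
    # Provenance: retrieved source IDs can be cited alongside the answer.
    context = "\n".join(f"[{sid}] {text}" for sid, text in passages)
    prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    return generate(prompt)

print(answer("Where is the Eiffel Tower?"))
```

The design point: editing `doc-1` immediately changes what the system “knows,” while improving the generator itself would require retraining its weights.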
The distinction maps to Nonaka’s knowledge types: parametric memory resembles internalized tacit knowledge; non-parametric memory resembles externalized explicit knowledge.
Related: 05-atom—rag-definition