AI Memory Systems Are Just Lookup, Not True Memory
A recent preprint on arXiv argues that existing agentic memory systems, including vector stores, retrieval-augmented generation, scratchpads, and context-window management, do not constitute genuine memory but function as lookup mechanisms. The authors contend that equating lookup with memory is a category error with significant consequences for an agent's capabilities, long-term learning, and security. Retrieval generalizes by similarity to past cases, while weight-based memory generalizes by abstracting rules that apply to new inputs. Conflating the two yields agents that accumulate notes without gaining expertise and that hit a generalization ceiling on compositionally novel tasks, a ceiling no increase in context size or retrieval quality can overcome. Such systems are also structurally vulnerable to persistent memory poisoning, since harmful content, once stored, can propagate across future interactions. The paper draws on Complementary Learning Systems theory from neuroscience, which holds that biological intelligence pairs a fast episodic store with slow consolidation of structured knowledge into weights. The preprint is available on arXiv under ID 2604.27707.
Key facts
- Current agentic memory systems implement lookup, not memory.
- Treating lookup as memory is a category error with provable consequences.
- Retrieval generalizes by similarity; weight-based memory generalizes by abstract rules.
- Conflating the two produces agents that accumulate notes without expertise.
- Agents face a provable generalization ceiling on compositionally novel tasks.
- No increase in context size or retrieval quality can overcome this ceiling.
- Systems are structurally vulnerable to persistent memory poisoning.
- Paper draws on Complementary Learning Systems theory from neuroscience.
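The retrieval-versus-weights distinction above can be made concrete with a toy sketch. The data, function names, and task here are illustrative inventions, not from the paper: a lookup memory answers a query by replaying its most similar stored case, while a weight-based learner fits a rule from the same cases and applies it to inputs far outside what it has seen.

```python
# Illustrative sketch (not from the paper): lookup vs. weight-based memory
# on examples of the rule y = 2x.

# Episodic "memory": (input -> output) pairs the agent has seen.
store = {2.0: 4.0, 3.0: 6.0, 5.0: 10.0}

def retrieve(x):
    """Lookup: return the output of the most similar stored input."""
    nearest = min(store, key=lambda k: abs(k - x))
    return store[nearest]

def fit_rule(pairs):
    """Weight-based: fit y = w * x to the same pairs by least squares."""
    num = sum(x * y for x, y in pairs)
    den = sum(x * x for x, _ in pairs)
    return num / den

w = fit_rule(store.items())

# Near the stored cases, both approaches agree:
print(retrieve(3.0), w * 3.0)      # 6.0 6.0
# On a far-from-distribution query, lookup replays the nearest stored
# case, while the learned weight extrapolates the rule:
print(retrieve(100.0), w * 100.0)  # 10.0 200.0
```

The gap on the novel query is the point: adding more stored pairs narrows it only where examples happen to land, whereas the fitted weight generalizes by the rule itself, which is the paper's claimed ceiling for lookup-based systems.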
Entities
Institutions
- arXiv