Memini: Multi-Timescale Memory for Continual LLM Knowledge Updating
A new paper on arXiv proposes Memini, an external memory system for large language models (LLMs) that mimics biological memory dynamics. Unlike traditional external memories, which are static or explicitly managed, Memini organizes memory as a directed graph where each edge carries two coupled internal variables (one fast, one slow) following the Benna-Fusi model of synaptic consolidation. This design lets episodic sensitivity, gradual consolidation, and selective forgetting emerge from a single mechanism, enabling LLMs to update knowledge continuously without retraining. The paper argues that external memory should function as a learning substrate that reorganizes through its own dynamics, rather than as a static repository. The work is categorized under Computer Science > Machine Learning and was submitted to arXiv on May 9, 2025.
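The fast/slow coupled-variable idea can be illustrated with a minimal Euler-step simulation in the spirit of the Benna-Fusi chain. Everything below (the `Edge` class, its parameter names, and the specific conductance and capacitance values) is an illustrative assumption, not the paper's actual implementation:

```python
class Edge:
    """Sketch of one memory-graph edge with a fast and a slow coupled variable.

    Hypothetical parameters (not from the paper): input conductance g_in,
    coupling conductance g_couple, per-variable capacitances c_fast/c_slow,
    and a small leak on the slow variable to allow gradual forgetting.
    """

    def __init__(self, g_in=1.0, g_couple=0.1, c_fast=1.0, c_slow=10.0, leak=0.01):
        self.fast = 0.0   # fast variable: reacts quickly to new episodes
        self.slow = 0.0   # slow variable: consolidates, decays slowly
        self.g_in, self.g_couple = g_in, g_couple
        self.c_fast, self.c_slow = c_fast, c_slow
        self.leak = leak

    def step(self, signal=0.0, dt=0.1):
        # Fast variable is driven by the input signal and relaxes toward
        # the slow variable through the coupling term.
        d_fast = (self.g_in * signal
                  - self.g_couple * (self.fast - self.slow)) / self.c_fast
        # Slow variable absorbs the difference (consolidation) and leaks
        # very gradually (selective forgetting).
        d_slow = (self.g_couple * (self.fast - self.slow)
                  - self.leak * self.slow) / self.c_slow
        self.fast += dt * d_fast
        self.slow += dt * d_slow


# Demo: a brief input pulse, then a quiet period.
edge = Edge()
for _ in range(20):          # episodic event: strong transient input
    edge.step(signal=1.0)
fast_after_pulse, slow_after_pulse = edge.fast, edge.slow
for _ in range(200):         # no input: fast decays, slow retains a trace
    edge.step(signal=0.0)
```

After the pulse the fast variable has jumped while the slow one has barely moved (episodic sensitivity); during the quiet period the fast variable relaxes and part of its value transfers into the slow variable, which then decays only through the small leak (consolidation and slow forgetting).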
Key facts
- Paper titled "Continual Knowledge Updating in LLM Systems: Learning Through Multi-Timescale Memory Dynamics"
- Proposes Memini, an associative memory system for LLMs
- Memory organized as a directed graph
- Each edge has two coupled internal variables: fast and slow
- Uses Benna-Fusi model of synaptic consolidation
- Enables episodic sensitivity, gradual consolidation, and selective forgetting
- Aims to allow LLMs to update knowledge without retraining
- Submitted to arXiv on May 9, 2025
Entities
Institutions
- arXiv