ARTFEED — Contemporary Art Intelligence

MeMo: Modular Memory Framework for LLMs

ai-technology · 2026-05-16

Researchers have introduced MeMo (Memory as a Model), a modular framework that lets large language models (LLMs) absorb new information without retraining. MeMo encodes fresh data into a dedicated memory model while leaving the LLM's original parameters untouched. The framework captures intricate cross-document relationships, is robust to retrieval noise, avoids catastrophic forgetting, and needs no access to the LLM's weights or logits, so it works with both open-source and proprietary models. At inference time, its retrieval cost is independent of corpus size. The authors report strong results on three benchmark datasets; the paper is available on arXiv under identifier 2605.15156.
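The summary states the framework's contract (a frozen LLM, a separate trainable memory, black-box access) but not its internals. Below is a minimal sketch of that general pattern in PyTorch, not the paper's actual method: `MemoryModel`, its slot-attention read, and every hyperparameter are illustrative assumptions.

```python
# Minimal sketch of the modular-memory pattern described above.
# NOT the method from arXiv 2605.15156: MemoryModel, the slot-attention
# read, and all hyperparameters are illustrative assumptions. What it
# mirrors from the summary:
#   - new knowledge lives in a separate, trainable memory model,
#   - the LLM itself is never updated (no catastrophic forgetting),
#   - the LLM is used as a black box, so no weights or logits are needed.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MemoryModel(nn.Module):
    """A fixed-size parametric memory trained on new documents."""
    def __init__(self, vocab_size: int, dim: int = 256, n_slots: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.slots = nn.Parameter(0.02 * torch.randn(n_slots, dim))  # learned memory
        self.out = nn.Linear(dim, vocab_size)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        q = self.embed(token_ids).mean(dim=1)           # (batch, dim) query
        attn = torch.softmax(q @ self.slots.T, dim=-1)  # attend over slots
        read = attn @ self.slots                        # (batch, dim) readout
        return self.out(read)                           # token logits

memory = MemoryModel(vocab_size=50_000)
optimizer = torch.optim.Adam(memory.parameters(), lr=1e-3)  # memory params only

def train_step(context_ids: torch.Tensor, target_ids: torch.Tensor) -> float:
    """Absorb new documents into the memory; the LLM is never touched."""
    loss = F.cross_entropy(memory(context_ids), target_ids)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def answer(llm_api, question: str, readout: str) -> str:
    """Query the frozen LLM as a pure text API: no weights, no logits."""
    return llm_api(f"Background: {readout}\n\nQuestion: {question}")
```

The interface is the key design point: because the memory communicates with the LLM through ordinary text, the same wrapper applies to open-source and proprietary models alike.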

Key facts

  • MeMo stands for Memory as a Model
  • It is a modular framework for LLMs
  • Encodes new knowledge into a dedicated memory model
  • Keeps LLM parameters unchanged
  • Captures complex cross-document relationships
  • Robust to retrieval noise
  • Avoids catastrophic forgetting
  • Does not require access to LLM weights or logits
  • Retrieval cost independent of corpus size at inference (illustrated after this list)
  • Tested on three benchmark datasets
  • Paper available on arXiv: 2605.15156
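
The constant-cost claim above follows from the memory being a fixed-size model rather than a document index: a read touches a fixed number of parameters no matter how many documents were absorbed. A toy illustration (the cost formulas are hypothetical operation counts, not measurements from the paper):

```python
# Toy illustration of the constant-retrieval-cost claim; the formulas
# are hypothetical order-of-magnitude counts, not figures from the paper.
def memory_read_ops(n_slots: int, dim: int) -> int:
    # Slot-attention read: fixed work, independent of corpus size.
    return n_slots * dim

def index_scan_ops(n_docs: int, dim: int) -> int:
    # Naive nearest-neighbour scan: work grows with the corpus.
    return n_docs * dim

for n_docs in (1_000, 100_000, 10_000_000):
    print(f"{n_docs:>10} docs | memory read: {memory_read_ops(128, 256):>8} ops"
          f" | index scan: {index_scan_ops(n_docs, 256):>12} ops")
```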

Entities

Institutions

  • arXiv

Sources

  • arXiv: 2605.15156