ARTFEED — Contemporary Art Intelligence

BeliefMem: LLM Agent Memory with Probabilistic Beliefs

ai-technology · 2026-05-09

A new AI research paper, 'Belief Memory: Agent Memory Under Partial Observability,' introduces BeliefMem, a memory system for large language model (LLM) agents that addresses the problem of self-reinforcing errors from deterministic memory. Existing methods store each observation as a single conclusion, discarding uncertainty and causing agents to act on potentially incorrect inferences. BeliefMem retains multiple candidate conclusions per observation, each with a probability updated via Noisy-OR rules as new data arrives. This allows agents to revisit alternatives and maintain uncertainty. The paper is available on arXiv under ID 2605.05583.
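
To make the Noisy-OR idea concrete, here is a minimal Python sketch of a belief-style memory. It illustrates the general technique only and is not the paper's implementation: the names (BeliefMemory, BeliefEntry, observe, strength) and the key-location example are hypothetical. Each topic keeps several candidate conclusions, and each new supporting observation raises a candidate's probability multiplicatively, p becomes 1 - (1 - p) * (1 - strength), instead of overwriting it, so alternatives remain available and no single observation hardens into a fact.

    from dataclasses import dataclass, field


    @dataclass
    class BeliefEntry:
        """One candidate conclusion and the probability that it holds."""
        conclusion: str
        probability: float


    @dataclass
    class BeliefMemory:
        """Toy belief store: several candidate conclusions per topic,
        each updated with a Noisy-OR rule as supporting evidence arrives.
        Illustrative only; not the data structure from the paper."""
        beliefs: dict = field(default_factory=dict)  # topic -> {conclusion: probability}

        def observe(self, topic: str, conclusion: str, strength: float) -> None:
            """Record one piece of evidence for `conclusion` about `topic`.

            `strength` in (0, 1) is how strongly this single observation,
            taken alone, supports the conclusion. Noisy-OR treats the
            observations as independent causes of the conclusion being true:
                P(conclusion) = 1 - prod_i (1 - strength_i)
            so each observation can only raise the probability, and no single
            noisy observation forces it to 1.
            """
            candidates = self.beliefs.setdefault(topic, {})
            prior = candidates.get(conclusion, 0.0)
            candidates[conclusion] = 1.0 - (1.0 - prior) * (1.0 - strength)

        def candidates(self, topic: str) -> list[BeliefEntry]:
            """All candidate conclusions for a topic, most probable first."""
            entries = [BeliefEntry(c, p) for c, p in self.beliefs.get(topic, {}).items()]
            return sorted(entries, key=lambda e: e.probability, reverse=True)


    if __name__ == "__main__":
        mem = BeliefMemory()
        # Two ambiguous observations about where a key is kept.
        mem.observe("key_location", "drawer", strength=0.6)
        mem.observe("key_location", "shelf", strength=0.4)
        # A later observation again points at the drawer; Noisy-OR raises it
        # to 1 - (1 - 0.6) * (1 - 0.5) = 0.8 rather than overwriting it.
        mem.observe("key_location", "drawer", strength=0.5)
        for entry in mem.candidates("key_location"):
            print(f"{entry.conclusion}: {entry.probability:.2f}")

Running the sketch prints drawer: 0.80 and shelf: 0.40, showing how a repeated observation strengthens one candidate without discarding the alternative.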

Key facts

  • Existing methods store each observation as a single deterministic conclusion.
  • BeliefMem is a new memory paradigm for LLM agents.
  • It stores multiple candidate conclusions per observation with probabilities.
  • Probabilities are updated using Noisy-OR rules.
  • It addresses self-reinforcing errors from deterministic memory.
  • BeliefMem allows agents to revisit alternative conclusions.
  • The approach maintains uncertainty over time.
  • The paper is on arXiv with ID 2605.05583.

Entities

Institutions

  • arXiv

Sources