ARTFEED — Contemporary Art Intelligence

EquiMem: Game-Theoretic Calibration for Shared Memory in Multi-Agent Debate

ai-technology · 2026-05-12

A new paper on arXiv (2605.09278) introduces EquiMem, a mechanism for calibrating shared memory in multi-agent debate (MAD) systems. The authors identify a vulnerability in which a single corrupted memory entry can contaminate downstream reasoning, and note that existing safeguards relying on LLM-based validation share the failure modes of the very components they are meant to guard. They formulate memory updating as a zero-trust memory game, in which no agent is assumed honest, and the game's equilibrium indicates how much each memory entry should be trusted. EquiMem scores each update algorithmically, using agents' retrieval queries and traversal paths as evidence rather than soliciting LLM judgments. The result is an inference-time calibration mechanism that targets the cross-agent dynamics of MAD, a gap left open by prior safeguards.
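The paper's exact formulation is not reproduced in this summary. As a purely illustrative sketch of the underlying idea, one could model trust in a single shared-memory update as a small two-player zero-sum game between a "defender" (the memory controller, choosing trust or distrust) and a "writer" (honest or corrupt), with observable evidence setting the cost of trusting a corrupted entry. All function names, payoff values, and the evidence-to-damage mapping below are assumptions for illustration, not EquiMem's algorithm.

```python
# Illustrative sketch only -- NOT the paper's actual formulation. Trust in one
# shared-memory update is modeled as a 2x2 zero-sum game between a "defender"
# (trust / distrust the entry) and a "writer" (honest / corrupt). Evidence --
# how many agents corroborate vs. contradict the entry -- sets the damage of
# trusting a corrupted update. All names and values here are assumptions.

def corruption_damage(corroborations: int, contradictions: int) -> float:
    # Heavily corroborated entries are cheap to trust wrongly (corruption
    # would likely be caught); contradicted entries are expensive.
    return (1.0 + contradictions) / (1.0 + corroborations)

def equilibrium_trust(damage: float, gain: float = 1.0,
                      catch_reward: float = 0.1) -> float:
    """Defender's trust probability at the mixed Nash equilibrium of the
    zero-sum game with defender payoffs:
                          honest    corrupt
        trust           [  gain,   -damage      ]
        distrust        [  0.0,     catch_reward]
    For gain, damage, catch_reward > 0 no pure equilibrium exists; equalizing
    the defender's expected payoff across the writer's two columns gives
        p = catch_reward / (gain + damage + catch_reward).
    """
    return catch_reward / (gain + damage + catch_reward)

# A well-supported entry earns more equilibrium trust than a contested one.
well_supported = equilibrium_trust(corruption_damage(corroborations=4,
                                                     contradictions=0))
suspicious = equilibrium_trust(corruption_damage(corroborations=0,
                                                 contradictions=3))
print(f"trust(well-supported)={well_supported:.3f}  "
      f"trust(suspicious)={suspicious:.3f}")
```

The zero-sum framing makes the trust score deliberately conservative: even a well-supported entry is never fully trusted, since the writer is modeled as potentially adversarial.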

Key facts

  • Paper published on arXiv with ID 2605.09278
  • Title: EquiMem: Calibrating Shared Memory in Multi-Agent Debate via Game-Theoretic Equilibrium
  • Multi-agent debate systems use shared memory for long-horizon reasoning
  • A single corrupted memory entry can contaminate reasoning
  • Existing safeguards use heuristics or LLM-based validation
  • LLM-based validation shares the failure modes of the components it is meant to guard
  • EquiMem formulates memory updating as a zero-trust memory game
  • No agent is assumed honest in the game formulation
  • The game's equilibrium serves as an indicator of optimal memory trust
  • EquiMem uses agents' retrieval queries and traversal paths as evidence
  • No LLM judgment is solicited in EquiMem
  • EquiMem is an inference-time calibration mechanism
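The facts above note that EquiMem draws on retrieval queries and traversal paths rather than LLM judgments. A minimal sketch of what "evidence without model calls" might look like is below; the trace format, field names, and the corroborate/contradict rule are all hypothetical assumptions, not the paper's definitions.

```python
# Hypothetical sketch: derive per-entry evidence counts purely from logged
# retrieval queries and memory traversal paths, with no LLM call. The trace
# format and the corroborate/contradict rule are assumptions for illustration.

def evidence_counts(entry_id: str, traces: list) -> tuple:
    """traces: one dict per agent, with 'retrieved' (set of entry ids the
    agent's queries returned) and 'path' (ordered entry ids the agent actually
    traversed while reasoning). An agent corroborates an entry if it retrieved
    it and kept it on its path; it contradicts the entry if it retrieved it
    but routed around it."""
    corroborations = contradictions = 0
    for trace in traces:
        if entry_id in trace["retrieved"]:
            if entry_id in trace["path"]:
                corroborations += 1
            else:
                contradictions += 1
    return corroborations, contradictions

# Three hypothetical agent traces: two keep entry "m1" on their path,
# one retrieves it but bypasses it.
traces = [
    {"retrieved": {"m1", "m2"}, "path": ["m1", "m2"]},
    {"retrieved": {"m1"},       "path": ["m1"]},
    {"retrieved": {"m1", "m3"}, "path": ["m3"]},
]
print(evidence_counts("m1", traces))  # -> (2, 1)
```

Counts like these could then feed a trust-calibration step, keeping the whole pipeline free of LLM-based validation.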

Entities

Institutions

  • arXiv

Sources