ARTFEED — Contemporary Art Intelligence

Memory Curse: Expanded Context Erodes LLM Cooperation in Social Dilemmas

ai-technology · 2026-05-11

A recent study posted to arXiv (2605.08060) indicates that larger context windows in large language models (LLMs) may unexpectedly undermine cooperation in multi-agent social dilemmas. The study evaluated 7 LLMs across 4 games over 500 rounds and found that cooperation degraded in 18 of 28 model-game combinations, an effect the authors term the 'memory curse.' An analysis of 378,000 reasoning traces attributed the decline to eroded forward-looking intent rather than heightened paranoia. Targeted fine-tuning with a LoRA adapter on forward-looking traces mitigated the decay and transferred zero-shot to unseen games. In addition, memory sanitization tests, which replaced the actual interaction history with synthetic cooperative records while keeping prompt length constant, restored cooperation, showing that the content of memory, not its length, drives the effect. These results challenge the assumption that a longer context is a strict improvement.
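The sanitization test described above can be sketched in a few lines. This is an illustrative reconstruction, not the paper's code: the record format, helper names, and the all-cooperative replacement policy are assumptions chosen so that the rewritten prompt has exactly the same length as the original.

```python
COOPERATE, DEFECT = "C", "D"

def render_round(i, mine, theirs):
    """Render one round of history as a fixed-width prompt line."""
    return f"Round {i:3d}: you played {mine}, opponent played {theirs}"

def render_history(history):
    """Concatenate rendered rounds into the agent's memory prompt."""
    return "\n".join(render_round(i, m, t) for i, (m, t) in enumerate(history, 1))

def sanitize(history):
    """Replace the real history with an all-cooperative synthetic record
    of the same number of rounds, leaving prompt length unchanged."""
    return [(COOPERATE, COOPERATE) for _ in history]

# A mixed real history containing defections.
real = [(COOPERATE, DEFECT), (DEFECT, DEFECT), (COOPERATE, COOPERATE)]
clean = sanitize(real)

# Same number of lines and characters, different content.
assert len(render_history(clean)) == len(render_history(real))
```

Because moves render as single fixed-width characters, swapping the history changes only what the model reads, not how much it reads, which is the control the study needed to separate memory content from memory length.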

Key facts

  • Study conducted across 7 LLMs and 4 games over 500 rounds
  • Cooperation degraded in 18 of 28 model-game settings
  • Effect termed the 'memory curse'
  • Analysis of 378,000 reasoning traces
  • Eroded forward-looking intent identified as mechanism
  • LoRA adapter on forward-looking traces mitigated decay
  • Zero-shot transfer to distinct games achieved
  • Memory sanitization restored cooperation via synthetic records
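The experimental setting behind these facts, an iterated social dilemma in which each agent's memory of past rounds grows every round, can be illustrated with a minimal harness. The Prisoner's Dilemma payoffs and the two hand-coded policies below are standard textbook choices for illustration; the paper's actual games and LLM agents are not reproduced here.

```python
# Payoff matrix for one round: (my payoff, opponent's payoff).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(history):
    # Cooperate first, then mirror the opponent's last move.
    return "C" if not history else history[-1][1]

def always_defect(history):
    return "D"

def play(agent_a, agent_b, rounds):
    """Run an iterated game; each agent receives its own growing memory
    of (own move, opponent move) pairs, analogous to an expanding context."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = agent_a(hist_a), agent_b(hist_b)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append((a, b))
        hist_b.append((b, a))
    return score_a, score_b

print(play(tit_for_tat, always_defect, 5))  # → (4, 9)
```

In the study, the hand-coded policies are replaced by LLMs prompted with the rendered history; the 'memory curse' finding is that feeding agents more of that history tended to reduce cooperative play.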

Entities

Institutions

  • arXiv

Sources