ARTFEED — Contemporary Art Intelligence

Recursive LLM Loops: Dose-Response Study of Persistent Redirection

ai-technology · 2026-05-06

A recent study posted to arXiv (2605.02236) measures how much injected text — the dose — is needed to redirect an established recursive language-model loop. The researchers decoupled the model from the context-update rule, testing append, replace, and dialog updates over 30-step loops. A key finding is that in append-mode loops, persistent redirection depends on memory policy. Under a 12,000-character tail clip, destination-coherent persistence stabilizes around 16% and retained source-basin escape approaches 36% at a dose of 400, with neither surpassing 50%. Under a full-history protocol, retained source-basin escape exceeds 50% near 400 tokens and reaches 75-80% by 1,500 tokens, while destination-coherent persistence first reaches 0.50 around 1,500 tokens (Wilson 95% CI [0.41, 0.61]).
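The append-mode protocol with a tail-clip memory policy can be sketched as follows. This is a minimal illustration, not the paper's code: model() stands in for an LLM call, and the inject/inject_at/dose parameters are illustrative names.

```python
def run_loop(model, seed, steps=30, inject=None, inject_at=None,
             tail_clip=12_000):
    """One append-mode recursive loop with an optional tail-clip policy.

    model      -- hypothetical callable: context string -> output string
    inject     -- injected text (the "dose"), added at step inject_at
    tail_clip  -- keep only the most recent N characters of context;
                  pass None to emulate the full-history protocol
    """
    context = seed
    for step in range(steps):
        if inject is not None and step == inject_at:
            context += "\n" + inject       # deliver the dose
        output = model(context)
        context += "\n" + output           # append update rule
        if tail_clip is not None:
            context = context[-tail_clip:]  # tail-clip memory policy
    return context
```

The replace and dialog update rules would differ only in how `context` is rebuilt each step (overwritten by the output, or maintained as alternating turns).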

Key facts

  • Study on arXiv:2605.02236
  • Recursive LLM loops tested with append, replace, and dialog updates
  • 30-step recursive loops used
  • Persistent redirection in append-mode is memory-policy-conditioned
  • Under 12,000-character tail clip: destination-coherent persistence plateaus near 16% at dose 400
  • Under tail clip: retained source-basin escape near 36% at dose 400
  • Under full-history: retained source-basin escape crosses 50% near 400 tokens
  • Under full-history: destination-coherent persistence reaches 0.50 near 1,500 tokens
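
The Wilson 95% interval quoted above is a standard score interval for a binomial proportion; a minimal sketch (the trial count n below is illustrative, not taken from the paper):

```python
import math

def wilson_ci(k, n, z=1.96):
    """95% Wilson score interval for k successes in n trials."""
    p = k / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return center - half, center + half

# e.g. 50 successes in 100 trials gives an interval of roughly
# (0.40, 0.60), comparable in width to the CI reported in the study
```

Unlike the normal-approximation interval, the Wilson interval stays inside [0, 1] and remains reasonable at small n, which is why it is commonly reported for per-condition success rates.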

Entities

Institutions

  • arXiv
