Lossless Context Management Boosts LLM Long-Context Performance
Lossless Context Management (LCM) is a novel deterministic architecture that improves large language model performance on long-context tasks. Volt, a coding agent built on LCM, outperformed Claude Code on the OOLONG benchmark at every context length from 32K to 1M tokens, with evaluation run on Opus 4.6. LCM extends the recursive paradigm of Recursive Language Models (RLMs) by decomposing symbolic recursion into two deterministic, engine-managed mechanisms: recursive context compression, which builds a hierarchical summary DAG with lossless pointers back to the original messages, and recursive task partitioning. The findings indicate that engine-level manipulation of recursive context can exceed the capabilities of both traditional LLMs and advanced coding agents with native file-system access. The paper is available on arXiv under ID 2605.04050.
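The core idea of recursive context compression can be illustrated with a small sketch. This is an illustrative reconstruction, not the paper's implementation: the class and field names (`SummaryNode`, `ContextDAG`, `sources`) are assumptions, but the structure shows what "lossless" means here: summary nodes stand in for spans of the message log while keeping pointers that can recover the originals exactly.

```python
# Hypothetical sketch of an LCM-style summary DAG with lossless pointers.
# Names and structure are illustrative assumptions, not from the paper.
from dataclasses import dataclass, field


@dataclass
class SummaryNode:
    summary: str                                       # compressed text shown to the model
    sources: list = field(default_factory=list)        # lossless pointers into the message log
    children: list = field(default_factory=list)       # DAG edges to finer-grained summaries


class ContextDAG:
    def __init__(self):
        self.messages = []      # original messages are never discarded
        self.root = None

    def add_message(self, text):
        """Append an original message; return its index (the lossless pointer)."""
        self.messages.append(text)
        return len(self.messages) - 1

    def compress(self, indices, summary):
        """Replace a span of messages with a summary node that keeps
        pointers back to the originals."""
        return SummaryNode(summary=summary, sources=list(indices))

    def expand(self, node):
        """Follow the pointers to recover the exact original messages."""
        return [self.messages[i] for i in node.sources]


dag = ContextDAG()
ids = [dag.add_message(m) for m in ("user: fix the bug", "tool: stack trace ...")]
node = dag.compress(ids, "User reported a bug; stack trace attached.")
assert dag.expand(node) == ["user: fix the bug", "tool: stack trace ..."]
```

Because compression only ever substitutes summaries for pointers, the engine can expand any node on demand, which is what distinguishes this from conventional lossy summarization of chat history.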
Key facts
- LCM is a deterministic architecture for LLM memory
- LCM-augmented agent Volt outperforms Claude Code on OOLONG benchmark
- Volt uses Opus 4.6 for evaluation
- LCM achieves higher scores at every context length between 32K and 1M tokens
- LCM extends the recursive paradigm of Recursive Language Models (RLMs)
- LCM decomposes recursion into recursive context compression and recursive task partitioning
- Recursive context compression uses a hierarchical summary DAG with lossless pointers
- Paper published on arXiv with ID 2605.04050
Entities
Institutions
- arXiv
Products and concepts
- Claude Code
- Recursive Language Models (RLMs)