Deep Reasoning: Structured Meta-Cognition for LLM Agents
A recent arXiv paper introduces Deep Reasoning, an inference-time technique that constructs task-specific scaffolds for LLM agents through structured meta-reasoning. The approach defines a formal language that represents meta-reasoning as executable decompositions over associative inference, formal computation, and recursive subproblem solving. Decomposition principles are encoded as in-context examples, which guide scaffold construction at test time. The method addresses the rigidity of existing LLM agents, whose scaffolds hard-code reasoning decisions in advance. Humans solve complex problems by fluidly shifting among reasoning modes, such as planning, executing, and revising, whereas LLM agents struggle to adapt the structure of their reasoning. Deep Reasoning instead lets agents build a scaffold tailored to each task. The paper is available on arXiv under ID 2605.11388.
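The summary above does not include any code from the paper, so the following is a minimal, hypothetical Python sketch of what a decomposition language with the three named step types (associative inference, formal computation, recursive subproblem solving) and a simple recursive executor might look like. All class, function, and parameter names here are illustrative assumptions, not the paper's actual API.

```python
# Hypothetical sketch of a decomposition language for task-specific scaffolds.
# Step types and the executor are illustrative assumptions, not the paper's API.
from dataclasses import dataclass, field
from typing import Callable, List, Union

@dataclass
class Associate:
    """Associative inference: free-form reasoning delegated to the LLM."""
    prompt: str

@dataclass
class Compute:
    """Formal computation: a deterministic procedure (e.g. arithmetic, lookup)."""
    fn: Callable[..., object]
    args: tuple = ()

@dataclass
class Decompose:
    """Recursive subproblem solving: split a task into child steps."""
    subtasks: List["Step"] = field(default_factory=list)

Step = Union[Associate, Compute, Decompose]

def execute(step: Step, llm: Callable[[str], str]) -> str:
    """Recursively interpret a scaffold built from the three step types."""
    if isinstance(step, Associate):
        return llm(step.prompt)                 # defer to the model
    if isinstance(step, Compute):
        return str(step.fn(*step.args))         # run the formal procedure
    # Decompose: solve each subproblem and concatenate the partial results
    return "\n".join(execute(s, llm) for s in step.subtasks)

# Example scaffold (hypothetical) for "What is 12 * 34, and is the result even?"
scaffold = Decompose([
    Compute(fn=lambda a, b: a * b, args=(12, 34)),
    Associate(prompt="Given the product computed above, state whether it is even."),
])
```

The point of the sketch is only to make the three-way decomposition concrete; how the actual formal language is specified and parsed is described in the paper itself.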
Key facts
- Deep Reasoning is an inference-time approach for constructing task-specific scaffolds through structured meta-reasoning.
- It uses a formal language representing meta-reasoning as executable decompositions over associative inference, formal computation, and recursive subproblem solving.
- Decomposition principles are encoded as in-context examples that guide test-time scaffold construction (see the sketch after this list).
- Current LLM agents lack flexibility because their scaffolds hard-code reasoning decisions in advance.
- Humans solve complex problems by flexibly shifting among reasoning modes: planning, executing, revising goals, resolving ambiguity, and applying formal procedures.
- The approach addresses the brittleness of LLM agents when tasks require adapting the structure of reasoning.
- The paper is published on arXiv with ID 2605.11388.
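As a rough illustration of the in-context encoding mentioned above, the following assumes a simple prompt-assembly step in which worked decomposition examples are prepended to the new task and the model is asked to emit a scaffold. The example texts and the build_scaffold_prompt helper are hypothetical, not taken from the paper.

```python
# Hypothetical prompt assembly: decomposition principles are shown to the model
# as worked examples, and the model emits a new scaffold at test time.
DECOMPOSITION_EXAMPLES = [
    # Each example pairs a task with a scaffold written in the decomposition language.
    ("Plan a 3-course dinner within a fixed budget.",
     "Decompose([Compute(sum_costs), Associate('adjust menu if over budget')])"),
    ("Prove that the sum of two even numbers is even.",
     "Decompose([Associate('recall definition of even'), Compute(check_algebra)])"),
]

def build_scaffold_prompt(task: str) -> str:
    """Assemble an in-context prompt asking the LLM to write a task-specific scaffold."""
    shots = "\n\n".join(f"Task: {t}\nScaffold: {s}" for t, s in DECOMPOSITION_EXAMPLES)
    return f"{shots}\n\nTask: {task}\nScaffold:"

# The returned string is sent to the model; its completion would then be parsed
# into Associate / Compute / Decompose steps and executed as sketched earlier.
```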
Entities
Institutions
- arXiv