ARTFEED — Contemporary Art Intelligence

Budget Control for LLM Search Agents via Value-of-Information

ai-technology · 2026-05-09

A recent arXiv preprint (2605.05701) proposes a technique for managing inference-time budgets in LLM-based search agents, addressing the joint constraint of limited tool calls and limited token generation. The authors frame multi-hop question answering as a two-stage budget-control problem. During search, a controller scores each feasible action with a Value-of-Information (VOI) estimate of the marginal task value per unit of budget, conditioned on the current state and the remaining dual budget. These scores guide decisions about retrieval, decomposition, and answer commitment. After search, a selective evidence-grounded finalizer compares the trajectory answer against a refined candidate. The goal is to improve answer quality under tight constraints without resorting to stronger models.
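To make the controller idea concrete, here is a minimal sketch of VOI-style action selection under a dual budget. All names, costs, and the scoring heuristic (gain divided by budget-normalized cost) are illustrative assumptions, not the paper's actual formulation.

```python
import math
from dataclasses import dataclass

@dataclass
class Action:
    name: str             # "retrieve", "decompose", or "commit" (assumed action set)
    expected_gain: float  # estimated increase in answer quality (hypothetical)
    tool_cost: int        # tool calls this action would consume
    token_cost: int       # tokens this action is expected to generate

def voi(action: Action, tool_budget: int, token_budget: int) -> float:
    """Marginal task value per unit of budget, normalized so that the
    scarcer resource weighs more heavily (an assumed heuristic)."""
    if action.tool_cost > tool_budget or action.token_cost > token_budget:
        return -math.inf  # infeasible under the remaining dual budget
    cost = (action.tool_cost / max(tool_budget, 1)
            + action.token_cost / max(token_budget, 1))
    return action.expected_gain / max(cost, 1e-9)

def pick_action(actions, tool_budget, token_budget):
    """Greedily choose the feasible action with the highest VOI score."""
    return max(actions, key=lambda a: voi(a, tool_budget, token_budget))

actions = [
    Action("retrieve", expected_gain=0.30, tool_cost=1, token_cost=200),
    Action("decompose", expected_gain=0.15, tool_cost=0, token_cost=400),
    Action("commit", expected_gain=0.01, tool_cost=0, token_cost=50),
]
best = pick_action(actions, tool_budget=3, token_budget=1000)
```

With these (made-up) numbers, retrieval offers the best value per unit of remaining budget; as budgets shrink, cheap commitment actions overtake expensive retrieval, which is the qualitative behavior the VOI framing is meant to produce.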

Key facts

  • arXiv:2605.05701
  • LLM search agents face dual budgets: tool calls and generated tokens
  • Two-stage inference-time budget control proposed
  • Controller uses Value-of-Information (VOI) scores
  • VOI estimates marginal task value per unit budget
  • Actions include retrieval, decomposition, and answer commitment
  • Selective evidence-grounded finalizer compares trajectory answer with refined candidate
  • Focus on multi-hop question answering
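The post-search step can also be sketched. The following is an assumed toy version of a selective evidence-grounded finalizer: it keeps the trajectory answer unless a refined candidate is clearly better supported by the collected evidence. The lexical-overlap support score and the `margin` parameter are invented for illustration and are not from the paper.

```python
def support(answer: str, evidence: list[str]) -> float:
    """Crude lexical-overlap proxy for how well the evidence supports an answer."""
    tokens = set(answer.lower().split())
    if not tokens or not evidence:
        return 0.0
    hits = sum(1 for doc in evidence if tokens & set(doc.lower().split()))
    return hits / len(evidence)

def finalize(trajectory_answer: str, refined_candidate: str,
             evidence: list[str], margin: float = 0.1) -> str:
    """Select the refined candidate only when its evidence support beats
    the trajectory answer's by at least `margin` (assumed selection rule)."""
    if support(refined_candidate, evidence) >= support(trajectory_answer, evidence) + margin:
        return refined_candidate
    return trajectory_answer

evidence = ["paris is the capital of france", "france is in europe"]
final = finalize("lyon", "paris", evidence)
```

The margin makes the finalizer selective: it defaults to the trajectory answer and switches only on a clear evidence advantage, avoiding needless token spend on marginal rewrites.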

Entities

Institutions

  • arXiv

Sources