AdaptEvolve: Adaptive LLM Selection for Efficient Evolutionary AI Agents
AdaptEvolve is a new AI framework that aims to improve the efficiency of evolutionary agentic systems by dynamically selecting among large language models (LLMs) at inference time. It addresses the trade-off between computational cost and reasoning capability by using intrinsic generation confidence to estimate, in real time, whether the current model can solve the task at hand. Unlike existing model cascades that rely on static heuristics or external controllers, AdaptEvolve explicitly accounts for model uncertainty. Empirical results show that confidence-driven selection achieves a favorable cost-performance Pareto frontier, reducing total inference cost while maintaining performance. The work is published on arXiv (2602.11931v2) and represents a step toward more adaptive AI systems.
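The summary does not spell out AdaptEvolve's selection rule, but the idea of confidence-gated routing can be sketched. The example below is an illustrative approximation, not the paper's method: it assumes confidence is derived from the mean per-token log-probability of the cheap model's generation, and `cheap_model` / `strong_model` are hypothetical callables standing in for actual LLM backends.

```python
import math
from typing import Callable, List, Tuple

# A model here is any callable mapping a query to
# (generated_text, per-token log-probabilities).
Model = Callable[[str], Tuple[str, List[float]]]

def mean_logprob_confidence(token_logprobs: List[float]) -> float:
    """Mean per-token log-probability, mapped to (0, 1] via exp()."""
    return math.exp(sum(token_logprobs) / len(token_logprobs))

def confidence_gated_select(
    query: str,
    cheap_model: Model,
    strong_model: Model,
    threshold: float = 0.8,  # illustrative value, not from the paper
) -> Tuple[str, str]:
    """Try the cheap model first; escalate to the strong model only
    when the cheap model's intrinsic confidence falls below threshold.

    Returns (answer, which_model_was_used).
    """
    answer, logprobs = cheap_model(query)
    if mean_logprob_confidence(logprobs) >= threshold:
        return answer, "cheap"
    answer, _ = strong_model(query)
    return answer, "strong"
```

With stub models, a high-confidence cheap generation is kept, while a low-confidence one triggers escalation to the stronger model, which is how a cascade trades cost for capability per query.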
Key facts
- AdaptEvolve is a framework for adaptive LLM selection in evolutionary agentic systems.
- It uses intrinsic generation confidence to estimate task solvability in real time.
- The approach reduces total inference cost while maintaining reasoning capability.
- Existing routing strategies rely on static heuristics or external controllers.
- Empirical results show a favorable Pareto frontier for confidence-driven selection.
- The paper is available on arXiv with ID 2602.11931v2.
- The work addresses the trade-off between computational efficiency and reasoning capability.
- AdaptEvolve operates within an evolutionary sequential refinement framework.