Active Learning Optimizes Communication in LLM Multi-Agent Systems
A new arXiv preprint (2605.05703) proposes an active learning framework for optimizing communication structures in large language model-based multi-agent systems (LLM-MAS). Existing methods train on randomly sampled tasks, which vary in difficulty and domain and lead to unstable optimization under limited budgets. The authors introduce an ensemble-based, information-theoretic task selection framework that estimates a candidate task's informativeness by how much it would change the distribution over communication-graph parameters, using ensemble Kalman inversion as an efficient, derivative-free approximation of the Bayesian update. By identifying the most valuable tasks for updating communication structures, the approach aims to improve downstream performance while reducing token usage.
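The paper's exact update rule is not reproduced in this summary, but a standard ensemble Kalman inversion step (the derivative-free Bayesian approximation the summary refers to) can be sketched as follows. The variable names, the treatment of task outcomes as forward-model outputs, and the noise covariance are illustrative assumptions, not details from the paper:

```python
import numpy as np

def eki_update(thetas, G_vals, y, noise_cov):
    """One ensemble Kalman inversion step (derivative-free Bayesian update).

    thetas:    (J, d) ensemble of communication-graph parameter vectors
    G_vals:    (J, m) forward-model outputs G(theta_j), e.g. predicted task outcomes
    y:         (m,)   observed outcome for the candidate task
    noise_cov: (m, m) observation-noise covariance
    """
    J = thetas.shape[0]
    dT = thetas - thetas.mean(axis=0)          # parameter deviations (J, d)
    dG = G_vals - G_vals.mean(axis=0)          # output deviations (J, m)
    C_tg = dT.T @ dG / (J - 1)                 # cross-covariance (d, m)
    C_gg = dG.T @ dG / (J - 1)                 # output covariance (m, m)
    K = C_tg @ np.linalg.inv(C_gg + noise_cov) # Kalman gain (d, m)
    # Nudge every ensemble member toward consistency with the observation;
    # no gradients of G are needed, only ensemble statistics.
    return thetas + (y - G_vals) @ K.T
```

Because the update uses only sample covariances of the ensemble, it treats the forward model as a black box, which is what makes it attractive when evaluating a communication structure means running an LLM-MAS on a task.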
Key facts
- arXiv paper 2605.05703 proposes active learning for communication structure optimization in LLM-MAS.
- Existing methods use randomly sampled training tasks, causing unstable optimization.
- The framework uses ensemble-based information-theoretic task selection.
- Task informativeness is estimated by change in distribution over graph parameters.
- Ensemble Kalman inversion provides efficient, derivative-free Bayesian approximation.
- Goal is to improve downstream performance and reduce token usage.
- Method addresses limited training budgets and task variability.
- Framework actively identifies most valuable tasks for structure updates.
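One common way to turn "change in distribution over graph parameters" into a numeric score is the KL divergence between Gaussian fits to the parameter ensemble before and after a hypothetical update. The summary does not specify the paper's exact informativeness measure, so the following is an illustrative sketch of that proxy:

```python
import numpy as np

def gaussian_kl(ens_post, ens_prior):
    """KL( N_post || N_prior ) between Gaussian fits to two parameter ensembles.

    A larger value means the candidate task would shift the distribution over
    graph parameters more, i.e. the task is more informative under this proxy.
    """
    mu0, mu1 = ens_post.mean(axis=0), ens_prior.mean(axis=0)
    S0 = np.cov(ens_post, rowvar=False)   # posterior-ensemble covariance
    S1 = np.cov(ens_prior, rowvar=False)  # prior-ensemble covariance
    d = mu0.size
    diff = mu0 - mu1
    S1_inv = np.linalg.inv(S1)
    _, logdet0 = np.linalg.slogdet(S0)
    _, logdet1 = np.linalg.slogdet(S1)
    return 0.5 * (np.trace(S1_inv @ S0) + diff @ S1_inv @ diff - d
                  + logdet1 - logdet0)
```

Under this scheme, each candidate task would be scored by the divergence its simulated update induces, and the highest-scoring task would be used for the next real update of the communication structure.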
Entities
Institutions
- arXiv