MAGE: Multi-Agent Graph-Guided Evolution for Language Models
MAGE (Multi-Agent Graph-guided Evolution) is a newly introduced framework for building language-model agents that evolve on their own. It externalizes self-knowledge into a co-evolutionary knowledge graph with four subgraphs, one of which stores experiences: teacher-written corrections of the learner's failures and the learner's own successful reasoning traces, retrieved as task-conditioned guidance for each new task. During evolution, the graph is updated alongside a task-level search bandit and a skill-level routing bandit, all driven by a shared reward stream, while the learner's backbone remains frozen. The paper argues that this design addresses limitations of earlier approaches, including unstructured natural-language feedback, flat episodic memory, and implicit reinforcement signals. The work is available on arXiv under identifier 2605.10064.
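The experience subgraph described above can be pictured as a small store of two node types, retrieved by task. The sketch below is purely illustrative: the class and field names are assumptions, and exact task matching stands in for whatever retrieval the paper actually uses.

```python
from dataclasses import dataclass, field

@dataclass
class ExperienceNode:
    task: str
    kind: str      # "correction" (teacher-written) or "trace" (learner's success)
    content: str

@dataclass
class ExperienceSubgraph:
    nodes: list = field(default_factory=list)

    def add_correction(self, task: str, text: str) -> None:
        # Teacher-written correction of a learner failure.
        self.nodes.append(ExperienceNode(task, "correction", text))

    def add_trace(self, task: str, text: str) -> None:
        # A reasoning trace the learner previously got right.
        self.nodes.append(ExperienceNode(task, "trace", text))

    def retrieve(self, task: str) -> list:
        # Task-conditioned retrieval: the results are handed to a frozen
        # execution model as guidance; the model's weights never change.
        return [n.content for n in self.nodes if n.task == task]

g = ExperienceSubgraph()
g.add_correction("algebra", "When isolating x, apply the operation to both sides.")
g.add_trace("algebra", "Solved 2x+3=7 by subtracting 3, then dividing by 2.")
print(g.retrieve("algebra"))
```

The point of the structure is that new knowledge accumulates in the graph rather than in the model's weights, which is what lets the backbone stay frozen.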
Key facts
- MAGE stands for Multi-Agent Graph-guided Evolution
- Framework externalizes self-knowledge into a co-evolutionary knowledge graph with four subgraphs
- Experience subgraph stores teacher-written failure corrections and learner's correct reasoning traces
- Retrieved knowledge serves as task-conditioned guidance for a frozen execution model
- Graph, task-level search bandit, and skill-level routing bandit updated from same reward stream
- Learner's backbone remains unchanged during evolution
- Addresses limitations of natural-language feedback, flat episodic memory, and implicit reinforcement signals
- Published on arXiv with identifier 2605.10064
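The shared-reward update in the facts above can be sketched with a generic bandit. Everything here is an assumption for illustration: epsilon-greedy stands in for whatever bandit policy MAGE uses, and the arm names are invented.

```python
import random

class EpsilonGreedyBandit:
    """Minimal epsilon-greedy bandit: tracks a running mean reward per arm."""

    def __init__(self, arms, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = {a: 0 for a in arms}
        self.values = {a: 0.0 for a in arms}

    def select(self):
        # Explore with probability epsilon, otherwise exploit the best arm.
        if random.random() < self.epsilon:
            return random.choice(list(self.values))
        return max(self.values, key=self.values.get)

    def update(self, arm, reward):
        # Incremental mean of observed rewards for this arm.
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

# Two bandits at different levels, as in the framework: one over search
# moves at the task level, one routing among skills.
task_bandit = EpsilonGreedyBandit(["expand", "refine"])
skill_bandit = EpsilonGreedyBandit(["math", "coding", "qa"])

for _ in range(100):
    move = task_bandit.select()
    skill = skill_bandit.select()
    reward = random.random()          # stand-in for the shared reward stream
    task_bandit.update(move, reward)  # the SAME scalar reward updates both
    skill_bandit.update(skill, reward)
```

The key property being illustrated is only that one reward signal drives both levels of adaptation (and, in the full system, the graph updates as well).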
Entities
Institutions
- arXiv