Federation over Text Framework Enables Multi-Agent Reasoning Through Shared Insight Libraries
Federation over Text (FoT) is a federated-learning-style framework that lets agents working on distinct tasks collaboratively build a shared library of metacognitive insights by iteratively federating their local reasoning. Unlike traditional distributed training, which federates gradients, FoT operates at the semantic level and requires neither gradient optimization nor supervision. Each agent improves on its own task through local reasoning and then uploads its reasoning traces to a central server, which aggregates and distills them into a library of insights that transfers across tasks and domains and can be reused by both current and future agents. Experiments reported by the authors indicate that FoT improves reasoning capability. The framework addresses a common limitation of LLM-powered agents, which typically reason from scratch on each new problem and lack a mechanism for transferring skills; with FoT, agents can draw on shared knowledge rather than starting over. The work appears as arXiv preprint 2604.16778v1 (cross-listed).
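The source does not include an implementation, so the following is a minimal sketch of one FoT round under stated assumptions: agent and server interfaces are hypothetical, and `llm` stands in for any text-completion callable. It only illustrates the described flow of local reasoning, trace upload, and server-side distillation into a shared insight library.

```python
# Hypothetical sketch of one Federation over Text (FoT) round.
# `llm` is any callable mapping a prompt string to a completion string.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Agent:
    task: str
    traces: List[str] = field(default_factory=list)

    def local_reasoning(self, llm: Callable[[str], str], insights: List[str]) -> str:
        """Work on the local task, optionally conditioning on the shared insight library."""
        context = "\n".join(insights)
        trace = llm(
            f"Known insights:\n{context}\n\nTask: {self.task}\nThink step by step."
        )
        self.traces.append(trace)  # keep the trace so it can be federated later
        return trace

@dataclass
class Server:
    insight_library: List[str] = field(default_factory=list)

    def aggregate_and_distill(self, llm: Callable[[str], str], traces: List[str]) -> None:
        """Distill uploaded reasoning traces into reusable, task-agnostic insights."""
        joined = "\n---\n".join(traces)
        summary = llm(
            "Extract general, cross-task reasoning insights from these traces:\n" + joined
        )
        self.insight_library.extend(line for line in summary.splitlines() if line.strip())

def fot_round(agents: List[Agent], server: Server, llm: Callable[[str], str]) -> None:
    # 1) each agent reasons locally on its own task; no gradients are exchanged
    traces = [a.local_reasoning(llm, server.insight_library) for a in agents]
    # 2) traces go to the central server, which aggregates and distills them
    server.aggregate_and_distill(llm, traces)
```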
Key facts
- Federation over Text (FoT) is a federated learning-like framework for multi-agent reasoning
- FoT enables agents to collectively generate shared libraries of metacognitive insights
- The framework operates at the semantic level, without gradient optimization or supervision
- Agents perform local thinking and self-improvement on specific tasks independently
- Reasoning traces are shared with a central server for aggregation and distillation
- The server creates cross-task and cross-domain insight libraries for agent use
- Experiments show FoT improves reasoning capabilities
- Addresses LLM-powered agents' tendency to reason from scratch without skill transfer (see the usage sketch below)
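Building on the hypothetical sketch above, this usage example shows the point made in the last fact: a later agent starts from the distilled library rather than from scratch. The stub LLM and task names are assumptions for illustration only.

```python
# Hypothetical usage of the FoT sketch above; `stub_llm` replaces a real model call.
def stub_llm(prompt: str) -> str:
    return "insight: decompose the problem before committing to an answer"

server = Server()
agents = [Agent(task="grid navigation"), Agent(task="algebra word problem")]
fot_round(agents, server, stub_llm)

# A later agent on a new task conditions on the shared library instead of starting over.
newcomer = Agent(task="code debugging")
newcomer.local_reasoning(stub_llm, server.insight_library)
```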