AI Research Challenges 'Wisdom of the Crowd' in Multi-Agent Systems
A recent arXiv preprint (2604.27274) challenges the assumption that 'Wisdom of the Crowd' effects improve collaboration among AI agents. The authors introduce the 'Consensus Paradox': agentic swarms tend to converge on internal agreement at the expense of external accuracy. Across 36 experiments comprising 12,804 trajectories on three benchmarks (GAIA, Multi-Challenge, SWE-bench), they document an 'Inverse-Wisdom Law': in kinship-dominated swarms, adding logical agents actually stabilizes incorrect trajectories. Introducing further logical audits produces 'Logic Saturation,' a regime in which internal entropy collapses to zero while factual error reaches unity. The study evaluates three state-of-the-art models: Gemini 3.1 Pro, Claude Sonnet 4.6, and GP.
Key facts
- arXiv:2604.27274v1 challenges the 'Wisdom of the Crowd' assumption in multi-agent systems.
- The Consensus Paradox describes agentic swarms prioritizing internal agreement over truth.
- 36 experiments with 12,804 trajectories were conducted across GAIA, Multi-Challenge, and SWE-bench.
- The Inverse-Wisdom Law states that adding logical agents to kinship-dominant swarms increases erroneous trajectory stability.
- Logic Saturation occurs when internal entropy reaches zero but factual error reaches unity.
- Three SOTA models were evaluated: Gemini 3.1 Pro, Claude Sonnet 4.6, and GP.
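The preprint's formal definitions are not reproduced here, but the Logic Saturation dynamic described above (internal entropy reaching zero while factual error reaches unity) can be sketched as a toy simulation. Everything in this snippet — the majority-adoption rule, the swarm size, the answer labels — is an illustrative assumption, not the paper's actual method:

```python
import math
from collections import Counter

def entropy(answers):
    """Shannon entropy (bits) of the answer distribution across the swarm."""
    counts = Counter(answers)
    n = len(answers)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def consensus_round(answers):
    """Each agent adopts the current majority answer (pure consensus pressure).

    This rule is a hypothetical stand-in for the paper's swarm dynamics.
    """
    majority = Counter(answers).most_common(1)[0][0]
    return [majority] * len(answers)

# Toy swarm of 7 agents: a majority holds the wrong answer "B",
# a minority holds the ground truth "A".
truth = "A"
answers = ["B", "B", "B", "B", "A", "A", "A"]

for step in range(3):
    h = entropy(answers)
    err = sum(a != truth for a in answers) / len(answers)
    print(f"round {step}: entropy={h:.3f} bits, error={err:.2f}")
    answers = consensus_round(answers)
```

Under this toy rule, a single round of consensus collapses the distribution: entropy drops to zero and the error fraction reaches 1.0, mirroring the saturation regime the authors describe, where agreement is total but wrong.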
Entities
Institutions
- arXiv