LLMs Fail at Dynamic Grounding in Multi-Agent Negotiation
A recent study posted to arXiv (2605.01750) presents an iterated, multi-turn negotiation game in which two LLM agents allocate shared resources to their private projects, with verifiably jointly optimal outcomes available. Although each agent, acting alone, can identify Pareto-optimal allocations, pairs of agents frequently fail to reach them, across both open- and closed-source models. The study identifies four primary failure modes: lack of shared interaction history impairs coordination; accumulated context can degrade performance; agents struggle with dynamic grounding repair; and breakdowns in mutual belief persist across negotiation turns. The authors argue that existing multi-agent LLM benchmarks emphasize static, one-shot tasks and neglect the need for grounding repair in ongoing interaction. The findings point to a significant gap in LLM communication skills, particularly dynamic grounding, in which meaning is negotiated collaboratively over the course of an interaction.
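For intuition, a Pareto-optimal allocation is one that no other allocation dominates, i.e. no alternative makes one agent better off without making the other worse off. In a small resource-splitting game it can be found by brute force. The setup below (total resource size, square-root utilities) is an illustrative assumption, not the paper's actual game:

```python
from itertools import product

# Hypothetical toy game: two agents split up to TOTAL units of a shared
# resource between their private projects; units may also go unused.
TOTAL = 10

def utilities(alloc):
    """Return (utility_A, utility_B) for an allocation (a, b).

    Square-root payoffs model diminishing returns (an assumption for
    illustration; the paper's utility functions are not specified here).
    """
    a, b = alloc
    return (a ** 0.5, b ** 0.5)

# All feasible allocations: nonnegative splits using at most TOTAL units.
allocations = [(a, b) for a, b in product(range(TOTAL + 1), repeat=2)
               if a + b <= TOTAL]

def dominates(x, y):
    """True if allocation x is at least as good for both agents and
    strictly better for at least one (i.e. x Pareto-dominates y)."""
    ux, uy = utilities(x), utilities(y)
    return all(p >= q for p, q in zip(ux, uy)) and ux != uy

def is_pareto_optimal(alloc):
    """True if no feasible allocation Pareto-dominates this one."""
    return not any(dominates(other, alloc) for other in allocations)

pareto_set = [alloc for alloc in allocations if is_pareto_optimal(alloc)]
# In this toy game, exactly the full splits (a + b == TOTAL) survive:
# leaving units unused is always dominated by giving them to either agent.
```

The point of the paper's result is that each agent can run a computation like this individually, yet the dyad still fails to land on an allocation in the Pareto set through negotiation.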
Key facts
- Study introduces iterated multi-turn negotiation game for LLM agents
- Agents allocate shared resources toward private projects
- Individual agents can identify Pareto-optimal allocations
- Agent dyads frequently fail to reach optimal outcomes
- Four failure modes identified including coordination and context liabilities
- Current benchmarks overlook dynamic grounding repair
- Open- and closed-source models both exhibit failures
- Research from arXiv paper 2605.01750
Entities
Institutions
- arXiv