LLMs as Strategic Agents in Security Dilemma Games
A new paper on arXiv investigates whether large language models (LLMs) can serve as experimental subjects in repeated security dilemma games, a classic model from international relations theory. The study extends the baseline game along three dimensions: multipolarity, finite time horizons, and communication availability. Across multiple models, the results show systematic patterns: multipolarity increases the likelihood of conflict, finite horizons induce universal unraveling consistent with backward induction, and communication reduces conflict via signaling and reciprocity. The design also elicits agents' private reasoning alongside their public messages, linking choices to strategic logics such as preemption and cooperation under uncertainty.
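One way to build intuition for the multipolarity finding (a hypothetical illustration only, not the paper's model or payoffs): if each of n agents independently triggers conflict with some per-round probability p, the chance that at least one does grows quickly with n.

```python
# Hypothetical intuition for why more poles mean more conflict:
# with n agents each defecting independently with probability p,
# P(at least one defection) = 1 - (1 - p)^n. This is an assumed
# toy calculation, not the paper's actual mechanism.
def conflict_prob(n: int, p: float = 0.1) -> float:
    """Probability that at least one of n agents triggers conflict."""
    return 1 - (1 - p) ** n

for n in (2, 5, 10):
    print(n, round(conflict_prob(n), 3))
```

Even with a modest per-agent rate, the aggregate risk rises steeply as the number of players grows, which matches the direction of the reported result.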
Key facts
- Paper on arXiv: 2605.03604
- LLMs used as experimental subjects in security dilemma games
- Three extensions: multipolarity, finite time horizons, communication
- Multipolarity increases conflict
- Finite horizons cause unraveling via backward induction
- Communication reduces conflict
- Access to private reasoning and public messages
- Links choices to strategic logics
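The finite-horizon unraveling in the list above follows the textbook backward-induction argument. A minimal sketch, assuming a standard Prisoner's Dilemma as a stand-in for the security dilemma stage game (the payoff numbers are an assumption; the paper's actual matrix is not given here):

```python
# Backward induction in a finitely repeated security dilemma.
# Stage game: a standard Prisoner's Dilemma stand-in, where
# "C" = restrain and "D" = arm/preempt. The payoffs below are an
# assumed textbook matrix, not taken from the paper.
PAYOFFS = {  # (own action, opponent action) -> own payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def spe_actions(horizon: int) -> list:
    """Compute one player's subgame-perfect plan by solving from the
    last round backwards. Because the continuation value of the
    remaining rounds does not depend on the current action, the stage
    game's dominant action D is chosen in every round -- the
    'unraveling' result: cooperation collapses under any finite,
    commonly known horizon."""
    cont = 0.0   # equilibrium value of the rounds still to come
    plan = []
    for _ in range(horizon):
        # In equilibrium the opponent plays its dominant action D, so
        # we maximise stage payoff against D plus the fixed continuation.
        action = max("CD", key=lambda a: PAYOFFS[(a, "D")] + cont)
        plan.append(action)
        cont += PAYOFFS[(action, "D")]
    return list(reversed(plan))

print(spe_actions(5))  # ['D', 'D', 'D', 'D', 'D']
```

Under an infinite or uncertain horizon this argument has no last round to anchor on, which is why finite, known horizons are the case where unraveling is predicted.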
Entities
Institutions
- arXiv