LLMs in Agent-Based Models: Capability vs. Explanation
A new arXiv paper (2605.12824) examines the use of large language models (LLMs) in generative agent-based models (ABMs) and social simulations. While LLM-driven agents can produce diverse high-level phenomena without explicit programming, the authors argue that capability and prediction are not the same as explanation. Drawing on the philosophy of science and the literature on mechanistic explanation, they contend that explaining a phenomenon requires showing how it is produced by organized entities and activities. The paper integrates recent work on LLM-ABMs, aiming to help modelers characterize their experiments and assess whether progress lies in capability or in explanation.
Key facts
- arXiv paper 2605.12824
- Focuses on LLMs in agent-based models and social simulations
- Distinguishes capability, prediction, and explanation
- Draws on philosophy of science and mechanisms literature
- Explanation requires showing how phenomena are produced by organized entities and activities
- Integrates recent work on LLM-ABMs
- Aims to help modelers characterize experiments
- Helps assess whether progress lies in capability or in explanation
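To make the capability-vs-explanation distinction concrete, the following is a minimal sketch of a generative LLM-ABM loop of the kind the paper discusses. All names here (`stub_llm`, `Agent`, `run`) are hypothetical illustrations, not the paper's framework, and `stub_llm` is a random placeholder standing in for a real model call: the point is that agent behavior is delegated to a prompted model rather than an explicitly programmed decision rule, so population-level patterns can emerge without the modeler having specified a mechanism.

```python
import random


def stub_llm(prompt: str) -> str:
    """Placeholder for a real LLM call; returns a canned choice so the
    sketch runs offline. In a real LLM-ABM this would query a model."""
    return random.choice(["adopt", "reject"])


class Agent:
    """An agent whose behavior comes from a prompt, not an explicit rule."""

    def __init__(self, name: str):
        self.name = name
        self.memory: list[str] = []  # running record of observed choices

    def act(self) -> str:
        # The modeler writes a prompt; the (stubbed) LLM picks the action.
        prompt = (
            f"You are {self.name}. Recent observations: {self.memory[-3:]}. "
            "Do you adopt or reject the new practice?"
        )
        choice = stub_llm(prompt)
        self.memory.append(choice)
        return choice


def run(num_agents: int = 5, steps: int = 3) -> list[list[str]]:
    """Run the simulation and return each round's choices."""
    random.seed(0)  # reproducible for this sketch
    agents = [Agent(f"agent-{i}") for i in range(num_agents)]
    history: list[list[str]] = []
    for _ in range(steps):
        choices = [agent.act() for agent in agents]
        # Every agent observes the whole round: aggregate patterns can
        # arise here even though no mechanism was explicitly coded.
        for agent in agents:
            agent.memory.extend(choices)
        history.append(choices)
    return history


if __name__ == "__main__":
    for round_num, choices in enumerate(run()):
        print(round_num, choices)
```

On the paper's view, a run like this demonstrates capability (the simulation produces plausible collective behavior), but it does not by itself explain that behavior: the producing "mechanism" is hidden inside the model call rather than exhibited as organized entities and activities.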
Entities
Institutions
- arXiv