Automated AI Framework for Theory Adjudication in Cognitive Science
Researchers have developed an automated adversarial collaboration framework that uses large language models, program synthesis, and information-theoretic experimental design to adjudicate among competing theories in cognitive science. The system operates in a closed loop, discovering candidate models and informative experiments as it runs. In a simulation study involving three classic categorization theories, the framework successfully recovered the ground-truth theory across a range of noise settings, though with weaker reliability in the hardest conditions. This proof of concept demonstrates the potential of closed-loop, in-silico theory adjudication, offering a new method for integrating evidence across tasks and model realizations. The work is posted on arXiv under the computer science (artificial intelligence) category.
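The closed loop described above can be sketched in miniature. The skeleton below is a hypothetical illustration, not the paper's code: every name (`run_adjudication_loop`, the toy theories "A" and "B") is an assumption. Each round chooses an experiment, observes a simulated outcome, and reweights the competing theories by Bayes' rule.

```python
import random

# Hypothetical skeleton of the closed adjudication loop (illustrative
# assumptions throughout; not the paper's implementation). Each round:
# pick an experiment, observe a simulated outcome, update beliefs.

def run_adjudication_loop(models, choose_experiment, run_experiment, n_rounds=10):
    beliefs = {name: 1.0 / len(models) for name in models}
    for _ in range(n_rounds):
        design = choose_experiment(models, beliefs)   # e.g. info-theoretic pick
        outcome = run_experiment(design)              # simulated participant
        # Bayesian update: weight each theory by its likelihood of the outcome
        like = {n: m(design, outcome) for n, m in models.items()}
        z = sum(beliefs[n] * like[n] for n in models)
        beliefs = {n: beliefs[n] * like[n] / z for n in models}
    return beliefs

# Toy demo: two "theories" disagree about a response rate; the simulated
# ground truth matches theory A, so A's posterior weight should grow.
random.seed(0)
models = {
    "A": lambda d, y: 0.8 if y else 0.2,  # predicts responses at rate 0.8
    "B": lambda d, y: 0.2 if y else 0.8,  # predicts responses at rate 0.2
}
beliefs = run_adjudication_loop(
    models,
    choose_experiment=lambda ms, b: None,            # trivial design here
    run_experiment=lambda d: random.random() < 0.8,  # ground truth favors A
)
print({n: round(p, 3) for n, p in beliefs.items()})
```

In the real framework the three components are far richer (LLM agents propose the models, program synthesis realizes them, and the design step is information-theoretic), but the loop structure is the same: propose, test, update, repeat.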
Key facts
- The framework combines LLM-based theory agents, program synthesis, and information-theoretic experimental design.
- It operates in a closed loop to discover models and experiments automatically.
- A simulation study tested three classic categorization theories.
- The framework recovered the ground-truth theory across a range of noise settings.
- Reliability was weaker in the hardest noise settings.
- The work provides a proof of concept for in-silico theory adjudication.
- The paper is available on arXiv.
- The approach aims to advance theory building in cognitive science.
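The information-theoretic design step named in the key facts can be illustrated concretely. The sketch below is an assumption-laden toy, not the paper's method: it scores each candidate stimulus by the mutual information between a binary outcome and the model indicator, and picks the most diagnostic one. The three lambda "theories" are hypothetical stand-ins for categorization models.

```python
import numpy as np

# Illustrative sketch of information-theoretic experimental design
# (assumed details; not the paper's code): choose the stimulus whose
# binary outcome carries the most mutual information about which of
# the candidate theories is true.

def expected_information_gain(models, prior, design):
    """I(M; Y | design) for binary outcome Y and model indicator M."""
    p1 = np.array([m(design) for m in models])   # P(Y=1 | m, design)
    probs = np.stack([1.0 - p1, p1], axis=1)     # shape (n_models, 2)
    marginal = prior @ probs                     # P(Y | design)
    h_marginal = -np.sum(marginal * np.log(marginal + 1e-12))
    h_conditional = -np.sum(prior[:, None] * probs * np.log(probs + 1e-12))
    return h_marginal - h_conditional            # H(Y) - H(Y | M)

# Toy stand-ins for theories: each maps a 1-D stimulus to P(category=1).
models = [
    lambda x: 1.0 / (1.0 + np.exp(-4.0 * (x - 0.3))),  # boundary near 0.3
    lambda x: 1.0 / (1.0 + np.exp(-4.0 * (x - 0.7))),  # boundary near 0.7
    lambda x: 0.5,                                      # guessing baseline
]
prior = np.ones(len(models)) / len(models)

designs = np.linspace(0.0, 1.0, 101)
eig = np.array([expected_information_gain(models, prior, x) for x in designs])
best = designs[int(np.argmax(eig))]
print(f"most diagnostic stimulus: {best:.2f}")
```

Scoring candidate designs this way is what lets a closed-loop system spend simulated trials where the theories disagree most, rather than sampling stimuli uniformly.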
Entities
Venues
- arXiv (preprint repository)