CauSim: A Framework to Scale Causal Reasoning for LLMs
CauSim is a framework that aims to scale causal reasoning in large language models (LLMs) by recasting it as a scalable supervised learning task rather than a scarce-label problem. It generates progressively complex executable structural causal models (SCMs) that LLMs construct step by step, so that locally simple construction steps compose into globally complex systems while answers to causal queries remain verifiable by execution. CauSim also bridges representations: it formalizes non-executable causal knowledge into code to enable data augmentation, and translates executable SCMs into natural language to provide supervision in harder-to-verify representations. The paper is available on arXiv under identifier 2605.09079.
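To make the idea of an "executable SCM" concrete, here is a minimal toy sketch. All variable names, structural equations, and the rain/sprinkler example are illustrative assumptions, not taken from the paper; the point is only that when an SCM is code, causal queries (including interventions via the do-operator) can be answered by running it, which is what makes the supervision verifiable.

```python
import random

def sample_scm(do_rain=None):
    """Sample one world from a toy SCM; `do_rain` applies a do-intervention.

    Each line is a structural equation: a variable computed from its
    causal parents plus exogenous noise.
    """
    rain = (random.random() < 0.3) if do_rain is None else do_rain
    sprinkler = (not rain) and random.random() < 0.5
    wet_grass = rain or sprinkler
    return {"rain": rain, "sprinkler": sprinkler, "wet_grass": wet_grass}

def estimate(query, do_rain=None, n=10_000):
    """Monte Carlo estimate of P(query) under an optional intervention."""
    random.seed(0)  # fixed seed so the estimate is reproducible
    return sum(query(sample_scm(do_rain)) for _ in range(n)) / n

# Interventional query: P(wet_grass | do(rain = True)) is exactly 1 in this
# toy model, because rain deterministically wets the grass.
print(estimate(lambda w: w["wet_grass"], do_rain=True))
# Observational query: P(wet_grass) with no intervention.
print(estimate(lambda w: w["wet_grass"]))
```

Because the model is executable, a training pipeline can check an LLM's answer to a causal query against the simulated ground truth rather than relying on human labels.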
Key facts
- CauSim is a framework for scaling causal reasoning in LLMs.
- It has LLMs incrementally construct executable structural causal models (SCMs).
- The framework turns causal reasoning from a scarce-label problem into a scalable supervised one.
- CauSim formalizes non-executable causal knowledge into code for data augmentation.
- It translates executable SCMs into natural language for supervision.
- The paper is posted on arXiv with ID 2605.09079.
- LLMs currently struggle with causal reasoning despite excelling in math and coding.
- Causal systems are complex and often expressed in non-executable forms.
Entities
Institutions
- arXiv