LLMs Prioritize Sensibility Over Compliance in Reasoning Conflicts
A new study from arXiv (2604.27251) investigates whether fundamental reasoning patterns—induction, deduction, and abduction—can be decoupled from specific problem instances in large language models (LLMs). The researchers introduce reasoning conflicts, settings where the instructed logical schema deviates from the one a task naturally calls for, creating tension between parametric and contextual information. Their evaluation shows that LLMs consistently favor sensibility over compliance: they adhere to task-appropriate reasoning patterns even when instructed otherwise. At the same time, task accuracy is not strictly determined by sensibility, suggesting that steering a model's reasoning pattern and obtaining correct answers are partially independent controllability challenges.
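To make the setup concrete, a reasoning conflict can be thought of as pairing a task with instructions for a mismatched schema. The sketch below is a hypothetical illustration of that idea, not the paper's actual prompt construction; the schema descriptions, function name, and wording are all assumptions.

```python
# Hypothetical sketch of a "reasoning conflict" prompt: the instructed
# schema deliberately mismatches the one the task naturally calls for.
# Instruction wording is illustrative, not taken from the paper.

INSTRUCTIONS = {
    "deduction": "Apply the given general rule to the specific case.",
    "induction": "Infer a general rule from the specific examples.",
    "abduction": "Propose the most plausible explanation for the observation.",
}

def build_conflict_prompt(task_text: str, expected: str, instructed: str) -> str:
    """Pair a task whose natural schema is `expected` with instructions
    demanding a different schema, creating the parametric/contextual tension
    the study probes."""
    if instructed == expected:
        raise ValueError("No conflict: instructed schema matches the expected one")
    return (
        f"{INSTRUCTIONS[instructed]}\n"
        "Think step by step.\n\n"  # Chain-of-Thought style context
        f"Task: {task_text}"
    )

prompt = build_conflict_prompt(
    task_text="All swans observed so far are white. What rule might hold?",
    expected="induction",    # the task naturally calls for induction
    instructed="deduction",  # but the prompt demands deduction instead
)
```

Under this framing, a compliant model would follow the deductive instruction, while a sensible model would still induce a rule from the examples; the study's finding is that models tend toward the latter.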
Key facts
- arXiv paper 2604.27251 investigates reasoning controllability in LLMs.
- Focuses on induction, deduction, and abduction patterns.
- Introduces reasoning conflicts as a key concept.
- LLMs prioritize sensibility over compliance.
- Task accuracy not strictly tied to sensibility.
- First systematic investigation of this problem.
- Uses Chain-of-Thought (CoT) practices as context.
- Announced on arXiv as a cross-listing.
Entities
Institutions
- arXiv