Neuro-Symbolic AI Study Challenges Assumption on Compositional Reasoning
A new study posted to arXiv challenges the assumption that compositional reasoning emerges from symbol grounding in neuro-symbolic AI. The research introduces the Iterative Logic Tensor Network (iLTN), a differentiable architecture for multi-step deduction. Using a formal taxonomy of generalization, the study shows that models trained solely on grounding fail to generalize to novel entities, unseen relations, and complex rule compositions, while the full iLTN, trained jointly on grounding and multi-step reasoning, overcomes these limitations. The findings indicate that compositional reasoning does not come for free with grounding but must be trained for directly, prompting a rethinking of neuro-symbolic system design.
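The summary does not reproduce the iLTN architecture itself, but the general Logic-Tensor-Network idea it builds on can be sketched: logical connectives are replaced with differentiable operations over soft truth degrees, and deduction steps are applied iteratively. The predicate names, truth values, and the `step` helper below are purely illustrative, not taken from the paper.

```python
def t_norm(a, b):
    """Product t-norm: a differentiable AND over truth degrees in [0, 1]."""
    return a * b

def step(facts, rules):
    """One deduction step: derive soft facts by applying Horn-style rules.

    `rules` maps a head atom to a list of rule bodies (lists of atoms); the
    head's truth degree is the max over bodies of the t-norm of body degrees.
    Iterating this function approximates multi-step deduction.
    """
    derived = dict(facts)
    for head, bodies in rules.items():
        for body in bodies:
            degree = 1.0
            for atom in body:
                degree = t_norm(degree, facts.get(atom, 0.0))
            derived[head] = max(derived.get(head, 0.0), degree)
    return derived

# Soft ground facts: parent(alice, bob) = 0.9, parent(bob, carol) = 0.8.
facts = {("parent", "alice", "bob"): 0.9, ("parent", "bob", "carol"): 0.8}
# Rule: grandparent(alice, carol) <- parent(alice, bob) AND parent(bob, carol).
rules = {("grandparent", "alice", "carol"):
         [[("parent", "alice", "bob"), ("parent", "bob", "carol")]]}

facts = step(facts, rules)
print(facts[("grandparent", "alice", "carol")])  # ~0.72 (= 0.9 * 0.8)
```

Because every operation is a smooth function of the input truth degrees, the same computation can be expressed in an autodiff framework and trained end-to-end, which is what makes joint training on grounding and reasoning possible.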
Key facts
- Compositional generalization is a weakness of neural networks.
- The study challenges the assumption that compositional reasoning emerges from symbol grounding.
- The Iterative Logic Tensor Network (iLTN) is introduced for multi-step deduction.
- Models trained only on grounding fail to generalize to novel entities, unseen relations, and complex rule compositions.
- Full iLTN trained on both grounding and reasoning succeeds in generalization.
- The research provides a formal taxonomy of generalization for probing.
- The study is published on arXiv with ID 2604.26521.
- The work is the first systematic empirical analysis disentangling grounding and reasoning.
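The taxonomy itself is not given in this summary, but the three failure modes it probes suggest splits of the following shape: hold out some entities, some relations, and all inferences beyond a training depth, then test on exactly those held-out cases. The fact encoding and names below are a hypothetical illustration, not the paper's protocol.

```python
def split_facts(facts, held_out_entities, held_out_relations, max_train_depth):
    """Partition (relation, args, depth) facts into train and probe sets.

    A fact goes to the probe set if it mentions a held-out entity (novel
    entities), uses a held-out relation (unseen relations), or needs more
    rule applications than seen in training (complex rule compositions).
    """
    train, probe = [], []
    for rel, args, depth in facts:
        novel_entity = any(a in held_out_entities for a in args)
        unseen_relation = rel in held_out_relations
        too_deep = depth > max_train_depth
        if novel_entity or unseen_relation or too_deep:
            probe.append((rel, args, depth))
        else:
            train.append((rel, args, depth))
    return train, probe

facts = [
    ("parent", ("alice", "bob"), 0),         # ground fact
    ("grandparent", ("alice", "carol"), 1),  # one rule application
    ("ancestor", ("alice", "dana"), 2),      # deeper composed inference
    ("parent", ("eve", "frank"), 0),         # mentions held-out entity eve
]
train, probe = split_facts(facts, held_out_entities={"eve"},
                           held_out_relations={"ancestor"},
                           max_train_depth=1)
print(len(train), len(probe))  # 2 2
```

A model trained only on grounding can fit the train set while failing every probe axis, which is the kind of disentangled evidence the study reports.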
Entities
Institutions
- arXiv