Graph Tsetlin Machine Enables Deep Logical Learning on Graphs
Researchers have introduced the Graph Tsetlin Machine (GraphTM), an extension of the Tsetlin Machine that learns interpretable deep clauses from graph-structured data. Unlike traditional models restricted to flat, fixed-length inputs, GraphTM handles sequences, grids, relations, and multimodal data via message passing, building nested deep clauses that capture sub-graph patterns with significantly fewer clauses. This improves both interpretability and data efficiency. On CIFAR-10 image classification, GraphTM surpasses a convolutional TM by 3.86 percentage points in accuracy while remaining interpretable, and it tracks action coreference in complex scenarios more reliably than competing reinforcement learning approaches. The findings are published as arXiv:2507.14874.
Key facts
- GraphTM extends Tsetlin Machine to graph-structured input.
- Uses message passing to build nested deep clauses.
- Achieves 3.86 percentage points higher accuracy on CIFAR-10 than a convolutional TM.
- Outperforms reinforcement learning methods in action coreference tracking.
- Supports sequences, grids, relations, and multimodality.
- Matches sub-graph patterns with significantly fewer clauses.
- Preserves interpretability while improving accuracy.
- Published on arXiv with ID 2507.14874.
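The message-passing idea behind nested deep clauses can be illustrated with a minimal sketch. This is a hypothetical simplification, not the GraphTM implementation or its API: a clause is modeled as a conjunction of node literals, and one message-passing round lets each node see which first-layer clauses its neighbours matched, so a second-layer clause can recognize a two-node sub-graph pattern.

```python
# Hypothetical, simplified sketch of nested clause evaluation via message
# passing; illustrative only, not the actual GraphTM implementation.

def eval_clause(clause, features):
    """A clause is a conjunction of literals: (symbol_name, polarity)."""
    return all(features.get(name, False) == polarity
               for name, polarity in clause)

def message_pass(adjacency, node_features, layer1_clauses):
    """Append, as new symbols on each node, the ids of layer-1 clauses
    matched by its neighbours. Deeper clauses can then condition on these
    messages and so match sub-graph patterns."""
    enriched = {n: dict(f) for n, f in node_features.items()}
    for node, neighbours in adjacency.items():
        for nb in neighbours:
            for cid, clause in enumerate(layer1_clauses):
                if eval_clause(clause, node_features[nb]):
                    enriched[node][f"msg_clause_{cid}"] = True
    return enriched

# Tiny two-node graph: node A is red, node B is round.
adj = {"A": ["B"], "B": ["A"]}
feats = {"A": {"red": True}, "B": {"round": True}}
layer1 = [[("round", True)]]                        # clause 0: "node is round"
layer2 = [("red", True), ("msg_clause_0", True)]    # "red with a round neighbour"

enriched = message_pass(adj, feats, layer1)
print(eval_clause(layer2, enriched["A"]))  # True: A is red, neighbour B is round
```

The deep clause `layer2` fires only on node A, since matching it requires both a local literal (red) and a neighbour message (a round neighbour); this is the sense in which nested clauses identify sub-graph patterns rather than isolated node features.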
Entities
Institutions
- arXiv