ARTFEED — Contemporary Art Intelligence

Neurosymbolic AI Framework for Interpretable Skeleton-Based Action Recognition

ai-technology · 2026-05-11

Researchers have proposed a neurosymbolic framework for skeleton-based human action recognition (HAR) that reframes the task as concept-driven first-order logical reasoning. The approach bridges representation learning and symbolic inference by grounding first-order logic in learnable motion concepts: a spatio-temporal skeleton encoder captures latent motion patterns, and a concept decoder translates them into interpretable logical predicates, separating pose-centric from dynamics-centric abstractions. Differentiable first-order logic layers then compose these concept predicates into human-readable rules that explain the recognized actions. The work, which addresses the interpretability gap in existing HAR models, is available on arXiv under the identifier 2605.07140v1.
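The "differentiable first-order logic layers" can be pictured as soft logical connectives applied to concept probabilities, so a rule's truth value stays smooth and trainable by gradient descent. A minimal sketch follows; the predicate names, the example values, and the choice of the product t-norm are illustrative assumptions, not the paper's actual formulation:

```python
# Soft (differentiable) logic over concept probabilities in [0, 1].
# Product t-norm for AND, its dual t-conorm for OR, complement for NOT;
# each operator is smooth, so rule scores admit gradients.

def soft_and(a: float, b: float) -> float:
    return a * b

def soft_or(a: float, b: float) -> float:
    return a + b - a * b

def soft_not(a: float) -> float:
    return 1.0 - a

# Hypothetical concept predicates for a "waving" clip, as probabilities a
# concept decoder might emit (the values here are made up for illustration).
concepts = {"arm_raised": 0.9, "wrist_oscillating": 0.8, "torso_moving": 0.1}

# Illustrative rule: Waving :- arm_raised AND wrist_oscillating AND NOT torso_moving
score = soft_and(
    soft_and(concepts["arm_raised"], concepts["wrist_oscillating"]),
    soft_not(concepts["torso_moving"]),
)
print(round(score, 3))  # 0.9 * 0.8 * 0.9 = 0.648
```

Because every operator is a polynomial in the concept scores, the rule score can backpropagate into whatever network produces those scores.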

Key facts

  • Framework reframes skeleton-based HAR as concept-driven first-order logical reasoning
  • Uses spatio-temporal skeleton encoder for latent motion representations
  • Spatio-temporal concept decoder separates pose-centric and dynamics-centric abstractions
  • Differentiable first-order logic layers compose concept predicates
  • Model learns human-readable logical rules for action semantics
  • Published on arXiv with ID 2605.07140v1
  • Addresses interpretability gap in existing HAR models
  • Bridges representation learning and symbolic inference
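The bullets above describe an encode → decode-to-concepts → compose-with-logic pipeline. A toy end-to-end sketch under stated assumptions: the mean-pooling "encoder", the sigmoid concept scores, and all weights are stand-ins invented for illustration, not the paper's architecture:

```python
import math

# Toy pipeline: skeleton frames -> latent -> concept probabilities -> rule score.

def encode(frames):
    """Spatio-temporal encoder stand-in: mean-pool joint values over time."""
    t, d = len(frames), len(frames[0])
    return [sum(f[i] for f in frames) / t for i in range(d)]

def decode_concepts(latent, weights, biases):
    """Concept decoder stand-in: one sigmoid score per motion concept."""
    scores = {}
    for name, w in weights.items():
        z = sum(wi * xi for wi, xi in zip(w, latent)) + biases[name]
        scores[name] = 1.0 / (1.0 + math.exp(-z))
    return scores

# Two frames of a three-value "skeleton" (made-up coordinates).
frames = [[0.2, 0.9, 0.1], [0.4, 0.7, 0.1]]
latent = encode(frames)  # approximately [0.3, 0.8, 0.1]

# Hypothetical heads: one attends to pose values, one to motion values,
# mirroring the pose-centric / dynamics-centric split described above.
weights = {"pose_centric": [2.0, 1.0, 0.0], "dynamics_centric": [0.0, 0.0, 3.0]}
biases = {"pose_centric": -1.0, "dynamics_centric": -1.0}
concepts = decode_concepts(latent, weights, biases)

# A differentiable AND (product t-norm) composes the two concept predicates
# into a single rule score for a candidate action.
rule_score = concepts["pose_centric"] * concepts["dynamics_centric"]
print(rule_score)
```

The point of the sketch is the interface, not the numbers: because the concept scores are named predicates rather than anonymous features, the final rule can be read back as "action holds when the pose-centric and dynamics-centric concepts both hold."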

Entities

Institutions

  • arXiv

Sources