ARTFEED — Contemporary Art Intelligence

Gyan: An Explainable Neuro-Symbolic Language Model

ai-technology · 2026-05-07

A new language model called Gyan, described in arXiv:2605.04759, claims to overcome key limitations of transformer-based LLMs. Gyan uses a non-transformer architecture that decouples language modeling from knowledge acquisition and representation. It achieves state-of-the-art performance on three widely cited datasets and superior performance on two proprietary datasets. The model draws on rhetorical structure theory, semantic role theory, and knowledge-based computational linguistics, and is designed to be interpretable, avoid hallucination, and require less compute. The paper was announced on arXiv as a cross-list submission.

Key facts

  • Gyan is an explainable language model with a non-transformer architecture.
  • It decouples the language model from knowledge acquisition and representation.
  • Achieves SOTA on 3 widely cited datasets and superior performance on 2 proprietary datasets.
  • Draws on rhetorical structure theory, semantic role theory, and knowledge-based computational linguistics.
  • Aims to be interpretable, avoid hallucination, and reduce compute requirements.
  • Paper announced on arXiv with ID 2605.04759v1 as a cross-list submission.
  • Transformer-based LLMs are criticized for lacking compositional context and hallucinating.
  • Gyan's meaning representation is based on linguistic theories.
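The paper does not detail Gyan's internals, but the core claim (decoupling language generation from a symbolic knowledge store) can be illustrated with a minimal, hypothetical sketch. All names here (`KB`, `answer`) are invented for illustration, not the paper's API; the point is that every answer carries its supporting fact, and an unanswerable query is refused rather than hallucinated.

```python
# Hypothetical sketch of a neuro-symbolic split: a symbolic knowledge
# store holds facts; a separate surface-realization step turns retrieved
# facts into text. This is NOT Gyan's actual mechanism, which the paper
# summary does not specify.

from typing import Optional

# Symbolic knowledge store: (subject, relation) -> object
KB = {
    ("water", "boiling_point_celsius"): "100",
    ("gold", "symbol"): "Au",
}

def answer(subject: str, relation: str) -> Optional[dict]:
    """Look up a fact; return text plus its provenance, or None."""
    obj = KB.get((subject, relation))
    if obj is None:
        return None  # refuse instead of hallucinating
    # Surface realization is decoupled from knowledge lookup.
    text = f"The {relation.replace('_', ' ')} of {subject} is {obj}."
    return {"text": text, "support": (subject, relation, obj)}

print(answer("gold", "symbol"))
print(answer("gold", "density"))  # not in the KB -> None
```

Because each response is traceable to an explicit fact, the system is interpretable by construction, which is the property the article attributes to Gyan's design.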

Entities

Institutions

  • arXiv

Sources