ARTFEED — Contemporary Art Intelligence

LLMs Enhanced with External Ontological Memory for Hybrid Intelligent Systems

ai-technology · 2026-04-24

A new study posted on arXiv (2604.20795) presents a hybrid framework for intelligent systems that augments large language models (LLMs) with an external layer of ontological memory. Instead of relying solely on parametric knowledge and vector-based retrieval (RAG), the framework builds and maintains a structured knowledge graph in RDF/OWL, enabling reasoning that is reliable, traceable, and semantically rich. A notable feature is automated ontology construction from heterogeneous sources such as documents and dialogue logs: the pipeline performs entity recognition, relation extraction, normalization, and triple generation, validates the results against SHACL and OWL constraints, and continuously updates the graph. At inference time, the LLM combines vector retrieval with graph-based reasoning and external tool use; the paper also reports experimental observations on planning tasks.
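The construction pipeline described above (entity recognition, relation extraction, normalization, triple generation) can be sketched in miniature. Everything below, including the toy regex extractor, the "founded" relation, and the triple format, is a hypothetical illustration under simplifying assumptions, not the paper's implementation.

```python
import re

# Hypothetical miniature of an ontology-construction pipeline:
# recognize entities, extract a relation, normalize labels, and
# emit subject-predicate-object triples. Patterns are illustrative only.

def recognize_entities(text):
    # Toy stand-in for NER: treat capitalized words as entities.
    return re.findall(r"\b[A-Z][a-z]+\b", text)

def extract_relation(text, entities):
    # Toy relation extraction: match an "<A> founded <B>" pattern
    # and keep it only if both arguments were recognized as entities.
    m = re.search(r"(\w+) founded (\w+)", text)
    if m and m.group(1) in entities and m.group(2) in entities:
        return (m.group(1), "founded", m.group(2))
    return None

def normalize(label):
    # Normalization step: map surface forms to canonical identifiers.
    return label.strip().lower()

def build_triples(texts):
    # Run the full pipeline over a batch of texts and collect triples.
    triples = []
    for text in texts:
        entities = recognize_entities(text)
        rel = extract_relation(text, entities)
        if rel:
            s, p, o = rel
            triples.append((normalize(s), p, normalize(o)))
    return triples

print(build_triples(["Alice founded Acme."]))
# [('alice', 'founded', 'acme')]
```

A production pipeline would replace the regexes with learned NER and relation-extraction models and serialize the output as RDF, but the stages map one-to-one onto those described in the paper.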

Key facts

  • Paper proposes hybrid architecture for intelligent systems using LLMs with external ontological memory.
  • Uses RDF/OWL representations for structured knowledge graph.
  • Automated pipeline for ontology construction from heterogeneous data sources.
  • Performs entity recognition, relation extraction, normalization, and triple generation.
  • Validation using SHACL and OWL constraints.
  • Continuous graph updates.
  • Inference combines vector-based retrieval with graph-based reasoning and external tool interaction.
  • Experimental observations on planning tasks.
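The validation step listed above can be mimicked with a tiny shape checker: each "shape" declares which properties a subject of a given type must carry, much as a SHACL node shape does. The shape format, prefixes, and property names here are invented for illustration; a real system would run a SHACL engine over an RDF graph with far richer constraint types.

```python
# Minimal stand-in for SHACL-style validation over a triple store.
# Shapes and data are hypothetical; real SHACL supports cardinality,
# datatype, and path constraints beyond this required-property check.

def validate(triples, shapes):
    """Return a list of violation messages (empty means conformant)."""
    # Index the predicates attached to each subject.
    by_subject = {}
    for s, p, o in triples:
        by_subject.setdefault(s, set()).add(p)

    violations = []
    for s, p, o in triples:
        # A typed subject must carry every property its shape requires.
        if p == "rdf:type" and o in shapes:
            missing = shapes[o] - by_subject.get(s, set())
            for pred in sorted(missing):
                violations.append(f"{s}: missing required property {pred}")
    return violations

shapes = {"ex:Person": {"ex:name", "ex:birthDate"}}
data = [
    ("ex:alice", "rdf:type", "ex:Person"),
    ("ex:alice", "ex:name", "Alice"),
]
print(validate(data, shapes))
# ['ex:alice: missing required property ex:birthDate']
```

In the framework's continuous-update loop, such a check would gate each batch of newly generated triples before they are merged into the graph.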

Entities

Institutions

  • arXiv

Sources