ARTFEED — Contemporary Art Intelligence

HAGE: Weighted Multi-Relational Memory for LLM Agents

other · 2026-05-12

A recent arXiv preprint (2605.09942) presents HAGE, a framework for agentic large language model (LLM) systems built on a weighted multi-relational memory. In contrast to conventional retrieval techniques such as flat vector search or static binary graphs, HAGE organizes memory as relation-specific graph views over shared nodes, where each edge carries a trainable feature vector that captures distinct relational signals. At query time, an LLM-based classifier identifies the query's relational intent, while a routing network dynamically modulates the dimensions of the edge embeddings. Traversal scores combine semantic similarity with the learned relational features, enabling query-conditioned sequential traversal through a unified relational memory graph.
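The scoring mechanism described above can be sketched roughly as follows. Everything here is an illustrative assumption rather than a detail from the paper: the relation names, the toy embeddings, and the heuristic routing mask that simply up-weights a per-relation slice of the edge embedding (the paper's routing network would be learned).

```python
import math

# Hypothetical sketch of query-conditioned edge scoring in the spirit of
# HAGE. Relation names, dimensions, and the routing heuristic are
# assumptions for illustration, not details from arXiv:2605.09942.

DIM = 6                                      # assumed edge-feature size
RELATIONS = ["causal", "temporal", "entity"]  # illustrative relation types

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cosine(a, b):
    return dot(a, b) / (math.sqrt(dot(a, a)) * math.sqrt(dot(b, b)) + 1e-9)

def routing_mask(relation):
    """Stand-in for the routing network: up-weight the embedding dims
    'owned' by the predicted relation, down-weight the rest."""
    mask = [0.1] * DIM
    i = RELATIONS.index(relation)
    for d in range(2 * i, 2 * i + 2):
        mask[d] = 1.0
    return mask

def edge_score(query_emb, node_emb, edge_feat, relation):
    """Traversal score = semantic similarity + routed relational signal."""
    semantic = cosine(query_emb, node_emb)
    relational = dot(routing_mask(relation), edge_feat)
    return semantic + relational

# Toy memory: two neighbours with identical node embeddings, so only the
# relational (routed) part of the score distinguishes them.
query = [1.0, 0.0, 0.0, 0.0, 0.0, 0.0]
neighbours = {
    "event_A": ([0.9, 0.1, 0.0, 0.0, 0.0, 0.0],   # node embedding
                [0.8, 0.7, 0.0, 0.0, 0.0, 0.0]),  # edge feature (causal dims)
    "event_B": ([0.9, 0.1, 0.0, 0.0, 0.0, 0.0],
                [0.0, 0.0, 0.9, 0.8, 0.0, 0.0]),  # edge feature (temporal dims)
}

relation = "causal"  # pretend the LLM classifier predicted this intent
scores = {n: edge_score(query, emb, feat, relation)
          for n, (emb, feat) in neighbours.items()}
best = max(scores, key=scores.get)
print(best)  # → event_A (the causal edge wins under the "causal" mask)
```

Under a "temporal" intent the mask shifts, and the same traversal step would prefer `event_B` instead: the routing, not the node similarity, conditions the walk on the query.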

Key facts

  • Paper arXiv:2605.09942 proposes HAGE framework
  • HAGE stands for Harnessing Agentic Memory via RL-Driven Weighted Graph Evolution
  • Memory is organized as relation-specific graph views over shared nodes
  • Each edge has a trainable relation feature vector
  • An LLM-based classifier identifies relational intent per query
  • A routing network dynamically modulates edge embedding dimensions
  • Traversal scores combine semantic similarity with learned features
  • Framework enables query-conditioned sequential traversal
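The "relation-specific graph views over shared nodes" structure from the facts above can be sketched as one node store plus one adjacency map per relation. This is my own minimal illustration under assumed names; the paper's edge features would be trainable rather than stored constants.

```python
from collections import defaultdict

class MultiRelationalMemory:
    """Illustrative sketch: a single shared node store, with a separate
    adjacency view per relation type. Names/API are assumptions."""

    def __init__(self, relations):
        self.nodes = {}  # node_id -> payload, shared by every view
        self.views = {r: defaultdict(list) for r in relations}

    def add_node(self, node_id, payload):
        self.nodes[node_id] = payload

    def add_edge(self, relation, src, dst, feature):
        # each edge carries a relation feature vector (trainable in HAGE,
        # just stored here)
        self.views[relation][src].append((dst, feature))

    def neighbours(self, relation, node_id):
        """Traverse only the view matching the query's relational intent."""
        return self.views[relation][node_id]

mem = MultiRelationalMemory(["causal", "temporal"])
mem.add_node("A", "user asked about deadlines")
mem.add_node("B", "deadline moved to Friday")
mem.add_edge("temporal", "A", "B", [0.2, 0.9])
mem.add_edge("causal", "A", "B", [0.8, 0.1])

# The same node pair appears in both views, with different edge features:
print([dst for dst, _ in mem.neighbours("temporal", "A")])  # → ['B']
```

The point of the shared node store is that traversal can switch views between hops: a walk can follow a causal edge out of a node and a temporal edge into the next, which is what makes the sequential navigation query-conditioned.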

Entities

Institutions

  • arXiv

Sources