ARTFEED — Contemporary Art Intelligence

H-Probes: Extracting Hierarchical Structures from LLM Latent Representations

ai-technology · 2026-05-06

A recent study introduces H-probes, a family of linear probes designed to extract hierarchical structure, such as node depth and pairwise distance, from the latent representations of large language models (LLMs). Published on arXiv (2605.00847v1), the work shows that these probes reliably identify subspaces rich in hierarchical information on synthetic tree-traversal tasks. Ablation experiments indicate that these subspaces are low-dimensional, causally important for task performance, and generalize both within and across domains. The researchers also found similar, though weaker, hierarchical structure in real-world datasets, suggesting that LLMs encode hierarchical relations in their latent spaces. The study addresses a gap in our understanding of how models geometrically represent the latent constructs that underpin hierarchical reasoning, a key cognitive process.

Key facts

  • H-probes are linear probes that extract hierarchical structure from LLM latent representations.
  • The probes extract depth and pairwise distance.
  • Synthetic tree traversal tasks were used for evaluation.
  • Hierarchy-containing subspaces are low-dimensional.
  • These subspaces are causally important for task performance.
  • The probes generalize both in-domain and out-of-domain.
  • Analogous hierarchical structures found in real-world data.
  • Paper published on arXiv with ID 2605.00847v1.
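The paper's training setup is not detailed in this summary, but the core idea of a linear depth probe can be sketched in a few lines: fit a ridge regression from hidden states to a hierarchical target such as node depth, then check how much variance the linear map explains. The function name and the synthetic data below are illustrative, not from the paper.

```python
import numpy as np

def fit_linear_probe(H, y, alpha=1.0):
    """Fit a ridge-regression probe mapping hidden states H (n, d) to targets y (n,)."""
    d = H.shape[1]
    # Closed-form ridge solution: (H^T H + alpha I) w = H^T y
    return np.linalg.solve(H.T @ H + alpha * np.eye(d), H.T @ y)

# Synthetic demo: hidden states encode "tree depth" along one latent
# direction (a stand-in for a low-dimensional hierarchy subspace), plus noise.
rng = np.random.default_rng(0)
n, d = 500, 64
direction = rng.normal(size=d)
direction /= np.linalg.norm(direction)
depth = rng.integers(0, 8, size=n).astype(float)
H = depth[:, None] * direction[None, :] + 0.1 * rng.normal(size=(n, d))

w = fit_linear_probe(H, depth)
pred = H @ w
r2 = 1 - np.sum((pred - depth) ** 2) / np.sum((depth - depth.mean()) ** 2)
```

In real probing work, `H` would be activations collected from a specific layer of the LLM, and a high held-out r² would indicate that depth is linearly decodable from that layer.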

Entities

Institutions

  • arXiv
