ARTFEED — Contemporary Art Intelligence

LLMs encode social role granularity along dominant geometric axis

ai-technology · 2026-05-09

A recent study shows that large language models (LLMs) encode the granularity of social roles, from individual to institutional, along a single dominant geometric axis. The researchers define a Granularity Axis as the difference between the mean hidden states of macro-roles and micro-roles. In Qwen3-8B, this axis aligns with the first principal component (PC1) of the role representation space at a cosine similarity of 0.972, with PC1 explaining 52.6% of the variance. The team constructed 75 social roles across five granularity levels and collected 91,200 role-conditioned responses under varied prompts. Role projections increase monotonically along the axis, indicating that granularity is the primary organizing dimension for prompted social roles in LLMs. The paper is available on arXiv under identifier 2605.06196.
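The axis construction described above can be sketched numerically. The snippet below uses synthetic stand-in vectors (the dimension, cluster placement, and noise scale are illustrative assumptions, not values from the paper) to show the two measurements reported: cosine alignment between the mean-difference axis and PC1, and the fraction of variance PC1 explains.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: mean hidden states for roles at the two
# granularity extremes. Dimension and noise scale are illustrative only.
d = 64
macro = rng.normal(1.0, 0.1, (15, d))   # macro-role hidden-state means
micro = rng.normal(-1.0, 0.1, (15, d))  # micro-role hidden-state means

# Granularity Axis: difference of the mean macro and micro hidden states.
axis = macro.mean(axis=0) - micro.mean(axis=0)

# PC1 of the role representation space via SVD of the centered role matrix.
roles = np.vstack([macro, micro])
centered = roles - roles.mean(axis=0)
_, s, vt = np.linalg.svd(centered, full_matrices=False)
pc1 = vt[0]

# Cosine similarity between the axis and PC1 (PC1's sign is arbitrary,
# so take the absolute value).
cos = abs(axis @ pc1) / (np.linalg.norm(axis) * np.linalg.norm(pc1))

# Fraction of total variance explained by PC1.
explained = s[0] ** 2 / np.sum(s ** 2)

print(f"cosine(axis, PC1) = {cos:.3f}")
print(f"PC1 variance explained = {explained:.1%}")
```

With well-separated synthetic clusters like these, the mean-difference axis and PC1 align almost perfectly; the paper's reported 0.972 cosine and 52.6% variance come from real Qwen3-8B hidden states, where other directions also carry structure.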

Key facts

  • Granularity Axis defined as difference between mean macro- and micro-role hidden states
  • In Qwen3-8B, axis aligns with PC1 at cosine 0.972
  • Axis accounts for 52.6% of variance in role representation space
  • 75 social roles constructed across five granularity levels
  • 91,200 role-conditioned responses collected
  • Role projections increase monotonically along the axis
  • Study shows granularity is dominant geometric axis for social roles in LLMs
  • Paper available on arXiv:2605.06196

Entities

Institutions

  • arXiv

Sources