ARTFEED — Contemporary Art Intelligence

RoPE Rotation Space as Learnable Dimension in Attention Mechanisms

ai-technology · 2026-04-29

A new arXiv paper (2604.24717) proposes treating the rotation manifold of Rotary Positional Embeddings (RoPE) as a learnable, signal-conditioned space, much as the imaginary axis adds an orthogonal dimension to the real line. The authors argue that current Transformer architectures treat RoPE as a fixed, hand-crafted structure indexed by discrete ordinal positions, overlooking a second dimension of expressivity in attention. Making the rotation manifold learnable would unlock an orthogonal degree of freedom: token embeddings encode the semantic (real) component, while rotations encode temporal or relational information. The paper suggests this could open new directions for attention-based architectures.
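The mechanics are easier to see in code. Below is a minimal sketch contrasting standard fixed-frequency RoPE with a hypothetical signal-conditioned variant in the spirit of the paper; the class name LearnableRoPE, the freq_net predictor, and the softplus parameterization are illustrative assumptions, not the paper's actual method.

    # Sketch: fixed-frequency RoPE vs. a hypothetical signal-conditioned
    # variant. LearnableRoPE and freq_net are illustrative assumptions,
    # not the parameterization used in arXiv:2604.24717.
    import torch
    import torch.nn as nn

    def fixed_rope(x: torch.Tensor, base: float = 10000.0) -> torch.Tensor:
        """Standard RoPE: rotate consecutive feature pairs by fixed,
        hand-crafted per-dimension frequencies at discrete positions."""
        seq_len, dim = x.shape
        half = dim // 2
        # Fixed geometric frequency schedule: theta_i = base^(-2i/dim).
        freqs = base ** (-torch.arange(half, dtype=x.dtype) * 2 / dim)
        # Discrete ordinal position indices m = 0, 1, ..., seq_len - 1.
        pos = torch.arange(seq_len, dtype=x.dtype).unsqueeze(1)
        angles = pos * freqs                      # (seq_len, half)
        cos, sin = angles.cos(), angles.sin()
        x1, x2 = x[:, :half], x[:, half:]
        # Apply a 2-D rotation to each (x1_i, x2_i) feature pair.
        return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)

    class LearnableRoPE(nn.Module):
        """Hypothetical variant: rotation frequencies are produced by a
        small network conditioned on the token signal, making the rotation
        manifold a learnable, input-dependent space rather than a fixed
        schedule."""

        def __init__(self, dim: int):
            super().__init__()
            # Signal-conditioned frequency predictor (an assumption; the
            # paper's actual parameterization may differ).
            self.freq_net = nn.Linear(dim, dim // 2)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            seq_len, dim = x.shape
            half = dim // 2
            # Softplus keeps the predicted frequencies positive.
            freqs = nn.functional.softplus(self.freq_net(x))  # (seq_len, half)
            pos = torch.arange(seq_len, dtype=x.dtype).unsqueeze(1)
            angles = pos * freqs
            cos, sin = angles.cos(), angles.sin()
            x1, x2 = x[:, :half], x[:, half:]
            return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)

    x = torch.randn(8, 64)
    print(fixed_rope(x).shape)           # torch.Size([8, 64])
    print(LearnableRoPE(64)(x).shape)    # torch.Size([8, 64])

Note that making the frequencies input-dependent sacrifices standard RoPE's strict relative-position property, which is presumably part of the expressivity trade-off the authors explore.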

Key facts

  • Paper arXiv:2604.24717 proposes learnable rotation manifold for RoPE.
  • Current RoPE is fixed and hand-crafted with discrete indices.
  • Analogy to complex numbers: imaginary axis as orthogonal dimension (made concrete in the worked equation after this list).
  • Token embeddings encode semantic (real) component.
  • Rotation manifold treated as signal-conditioned space.
  • Aims to unlock orthogonal degree of freedom in attention.
  • Published as arXiv preprint on April 26, 2026.
  • Authors argue rotation space is overlooked dimension of expressivity.
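The complex-number analogy above can be made concrete with a worked equation, assuming the standard RoPE formulation; the signal-conditioned map f_phi in the last line is an illustrative assumption, not the paper's notation.

    % Standard RoPE: pair features into z^(i) = x_{2i} + i x_{2i+1} and
    % rotate each pair by a fixed, position-dependent phase.
    \[
      \tilde{z}^{(i)}_m = z^{(i)}_m \, e^{\,\mathrm{i}\, m \theta_i},
      \qquad \theta_i = 10000^{-2i/d}.
    \]
    % The query-key product then depends only on relative position m - n:
    \[
      \langle \tilde{q}_m, \tilde{k}_n \rangle
        = \operatorname{Re} \sum_i q^{(i)}_m \, \overline{k^{(i)}_n} \,
          e^{\,\mathrm{i}\,(m-n)\,\theta_i}.
    \]
    % A learnable, signal-conditioned rotation manifold replaces the fixed
    % schedule with angles produced by the model:
    \[
      \theta_i \;\longrightarrow\; \theta_i(x) = f_\phi(x)_i .
    \]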

Entities

Institutions

  • arXiv

Sources