ARTFEED — Contemporary Art Intelligence

LLMs Develop Universal Number Representations, Reducing Arithmetic Errors

ai-technology · 2026-04-24

A new study finds that large language models (LLMs) converge on strikingly systematic sinusoidal representations for numbers, and that these representations are almost perfectly universal across model families: number embeddings from one model are broadly interchangeable with another's in many experimental setups. The authors argue that properly accounting for this universality is crucial for assessing how accurately LLMs encode numeric and ordinal information, and they show that mechanistically enhancing sinusoidality can reduce arithmetic errors.
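To make the "sinusoidal representation" claim concrete, the sketch below fits a bank of sine/cosine features of the numeric value to an embedding matrix by least squares and reports the variance explained. It runs on synthetic embeddings; the periods, dimensions, noise level, and the projection step at the end are illustrative assumptions, not values or methods from the paper. With a real model, one would substitute its input embeddings for the integer tokens.

```python
import numpy as np

# Illustrative only: test whether a (numbers x dim) embedding matrix is well
# explained by a linear combination of sinusoids of the numeric value.
rng = np.random.default_rng(0)
N, D = 1000, 64  # integers 0..999, embedding width 64 (assumed constants)
n = np.arange(N)

# Synthetic stand-in for an LLM's number embeddings: a few sinusoids + noise.
true_periods = np.array([2.0, 5.0, 10.0, 100.0])
phases = np.concatenate(
    [np.sin(2 * np.pi * n[:, None] / true_periods),
     np.cos(2 * np.pi * n[:, None] / true_periods)], axis=1)
E = phases @ rng.normal(size=(phases.shape[1], D))
E += 0.1 * rng.normal(size=E.shape)

# Candidate sinusoidal basis; these periods are assumptions for illustration.
periods = np.array([2.0, 3.0, 5.0, 7.0, 10.0, 50.0, 100.0, 500.0])
F = np.concatenate(
    [np.sin(2 * np.pi * n[:, None] / periods),
     np.cos(2 * np.pi * n[:, None] / periods)], axis=1)

# Least-squares fit: high R^2 means the embeddings are nearly sinusoidal.
W, *_ = np.linalg.lstsq(F, E, rcond=None)
r2 = 1 - ((E - F @ W) ** 2).sum() / ((E - E.mean(0)) ** 2).sum()
print(f"variance explained by sinusoidal basis: R^2 = {r2:.3f}")

# One reading of "enhancing sinusoidality": replace E with its projection
# onto the fitted sinusoidal subspace (an assumption, not the paper's method).
E_enhanced = F @ W
```

On the synthetic data the R^2 is close to 1; for a real model, the same fit quantifies how much of each number embedding is captured by the sinusoidal basis.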

Key facts

  • LLMs converge on systematic sinusoidal input embeddings that accurately encode numbers.
  • Number representations are almost perfectly universal across different LLM families.
  • Number embeddings are broadly interchangeable in many experimental setups (see the alignment sketch after this list).
  • Properly factoring in universality is crucial for assessing numeric encoding accuracy.
  • Enhancing sinusoidality can reduce arithmetic errors in LLMs.
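As a rough illustration of the interchangeability claim above, the following sketch aligns two synthetic "models" that share the same underlying sinusoidal number structure using an orthogonal Procrustes rotation, then measures alignment error on held-out numbers. The entire setup (SciPy's `orthogonal_procrustes`, the even/odd split, all constants) is an assumption for illustration, not the paper's protocol.

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes

# Hypothetical check of cross-model universality: can number embeddings
# from "model A" be mapped onto "model B"'s with a single rotation?
rng = np.random.default_rng(1)
N, D = 1000, 64
n = np.arange(N)

# Shared sinusoidal structure (synthetic stand-in for two different LLMs).
periods = np.array([2.0, 5.0, 10.0, 100.0])
base = np.concatenate(
    [np.sin(2 * np.pi * n[:, None] / periods),
     np.cos(2 * np.pi * n[:, None] / periods)], axis=1)
mix = rng.normal(size=(base.shape[1], D))
Q = np.linalg.qr(rng.normal(size=(D, D)))[0]  # random rotation between models

E_a = base @ mix + 0.05 * rng.normal(size=(N, D))        # model A embeddings
E_b = (base @ mix) @ Q + 0.05 * rng.normal(size=(N, D))  # model B embeddings

# Fit the rotation on even numbers, evaluate on held-out odd numbers.
train, test = n % 2 == 0, n % 2 == 1
R, _ = orthogonal_procrustes(E_a[train], E_b[train])
err = np.linalg.norm(E_a[test] @ R - E_b[test]) / np.linalg.norm(E_b[test])
print(f"held-out relative alignment error: {err:.3f}")
```

A small held-out error means one model's number embeddings can stand in for the other's after a single linear alignment, which is the operational sense of "interchangeable" used here.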

Entities

Institutions

  • arXiv

Sources