ARTFEED — Contemporary Art Intelligence

Debate on Representation in Large Language Models

ai-technology · 2026-05-04

A new paper on arXiv (2501.00885) takes up a fundamental question about Large Language Models (LLMs): whether their behavior is driven by representation-based information processing, as in biological cognition, or solely by memorization and stochastic table look-up. The authors argue that settling this algorithmic question is a precondition for progress in the debate between LLM optimists and pessimists, since it bears on higher-level issues such as whether these systems possess beliefs or intentions, and they aim to break the current stalemate by confronting it directly.

Key facts

  • Paper published on arXiv with ID 2501.00885
  • Addresses whether LLM behavior involves representation-based processing or just memorization
  • Argues that the algorithmic nature of LLMs is a key unresolved question
  • Implications for whether LLMs have beliefs or intentions
  • Seeks to overcome stalemate between optimists and pessimists
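The distinction at the heart of the paper can be made concrete with a toy sketch. The code below is purely illustrative and is not from the paper: it contrasts a model that only memorizes (a look-up table, which fails on unseen inputs) with one that encodes a rule internally (a stand-in for a learned representation, which generalizes). All names and the doubling rule are assumptions chosen for illustration.

```python
from typing import Optional


def lookup_model(memorized: dict, x: int) -> Optional[int]:
    """Pure memorization: can only answer inputs seen verbatim in 'training'."""
    return memorized.get(x)


def representational_model(x: int) -> int:
    """A trivial internal rule (here, doubling): generalizes beyond the training set."""
    return 2 * x


# "Training data" consistent with the doubling rule.
training = {1: 2, 2: 4, 3: 6}

# On a memorized input, both models agree.
print(lookup_model(training, 2), representational_model(2))    # 4 4

# On an unseen input, only the rule-based model produces an answer.
print(lookup_model(training, 10), representational_model(10))  # None 20
```

The toy case is far simpler than anything at issue for LLMs, but it shows why the two hypotheses predict different behavior on novel inputs, which is the kind of evidence the debate turns on.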

Entities

Institutions

  • arXiv