ARTFEED — Contemporary Art Intelligence

Neural Theorizer Learns World Theories from Raw Observations

ai-technology · 2026-05-07

Researchers have unveiled Learning-to-Theorize, a machine learning framework that lets AI derive explicit explanatory theories from raw, non-textual observations. The approach draws on developmental cognitive science, which holds that human understanding emerges from forming internal theories before language is mastered. It is realized in the Neural Theorizer (NEO), a probabilistic neural model that induces latent programs in a learned Language of Thought and executes them through a shared transition model. In NEO, theories are represented as executable, compositional programs whose learned components can be systematically recombined to explain new phenomena. The work is documented on arXiv under ID 2605.03413.
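To make the core idea concrete, here is a minimal sketch of "theories as executable, compositional programs run through a shared transition model." All names and rules below are hypothetical illustrations, not NEO's actual architecture: the paper's primitives are learned neurally and the model is probabilistic, whereas these are hand-written for clarity.

```python
# Hypothetical sketch: compositional theories executed by one shared
# transition function. None of these names come from the NEO paper.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Primitive:
    """A reusable theory component: a condition plus an effect on state."""
    name: str
    condition: Callable[[dict], bool]
    effect: Callable[[dict], dict]

@dataclass
class Theory:
    """A theory is a compositional program: an ordered list of primitives."""
    primitives: List[Primitive]

def transition(state: dict, theory: Theory) -> dict:
    """Shared transition model: executes any theory the same way,
    applying each primitive's effect when its condition holds."""
    for p in theory.primitives:
        if p.condition(state):
            state = p.effect(state)
    return state

# Two toy primitives (hand-written here; learned from raw data in NEO).
gravity = Primitive("gravity",
                    condition=lambda s: not s["supported"],
                    effect=lambda s: {**s, "y": s["y"] - 1})
friction = Primitive("friction",
                     condition=lambda s: s["vx"] != 0,
                     effect=lambda s: {**s, "vx": s["vx"] // 2})

# Recombining the same primitives yields new theories for new phenomena.
falling = Theory([gravity])
sliding_fall = Theory([gravity, friction])

state = {"y": 10, "vx": 8, "supported": False}
print(transition(state, sliding_fall))  # {'y': 9, 'vx': 4, 'supported': False}
```

The point of the sketch is the contrast the paper draws: rather than a black-box predictor of the next observation, the explanation is an explicit program whose parts can be inspected and reassembled.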

Key facts

  • Learning-to-Theorize is a new learning paradigm for inferring explanatory theories from raw observations.
  • The paradigm is inspired by developmental cognitive science.
  • NEO (Neural Theorizer) is a probabilistic neural model that induces latent programs.
  • NEO uses a learned Language of Thought and a shared transition model.
  • Theories in NEO are executable, compositional programs with recombinable primitives.
  • The research was published on arXiv with ID 2605.03413.
  • The approach contrasts with contemporary world models that focus on accurate future prediction.
  • The work aims to operationalize understanding as theory-building rather than prediction.

Entities

Institutions

  • arXiv
