ARTFEED — Contemporary Art Intelligence

LLMs Struggle to Use In-Context Representations for Downstream Tasks

ai-technology · 2026-05-04

A new study investigates whether large language models can deploy representations learned in-context to solve simple downstream tasks. Building on Park et al. (2024), which showed that LLMs can induce rich representations from context, the researchers tested open-weights models on next-token prediction and on a novel adaptive world modeling task. The results indicate significant limitations in flexibly using these representations, pointing to a gap between forming representations in-context and actually deploying them for adaptive behavior.

Key facts

  • Study builds on Park et al. (2024) demonstrating in-context representation learning in LLMs
  • Assesses open-weights LLMs on next-token prediction using in-context representations (a minimal sketch follows this list)
  • Introduces a novel task called adaptive world modeling
  • Findings show LLMs struggle to deploy learned representations for downstream tasks
  • Research appears on arXiv with identifier 2602.04212
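To make the evaluation setup concrete, below is a minimal sketch, in the spirit of the graph-tracing stimuli from Park et al. (2024), of how one might probe whether a model can use an in-context-learned structure for next-token prediction. The ring graph, the node labels, and the generate_fn wrapper are illustrative assumptions, not the paper's actual tasks or code.

    import random

    # Hypothetical probe: the model must induce a graph structure from
    # in-context random walks, then use it for next-token prediction.
    # The graph shape and labels are illustrative, not the paper's stimuli.

    random.seed(0)

    # A small ring graph whose nodes carry arbitrary, unrelated word labels,
    # so the structure can only be learned from the context itself.
    LABELS = ["apple", "violin", "harbor", "cactus", "mirror", "falcon"]
    EDGES = {LABELS[i]: [LABELS[(i - 1) % len(LABELS)],
                         LABELS[(i + 1) % len(LABELS)]]
             for i in range(len(LABELS))}

    def random_walk(start: str, steps: int) -> list[str]:
        """Sample a valid walk on the ring, shown in-context as evidence."""
        walk = [start]
        for _ in range(steps):
            walk.append(random.choice(EDGES[walk[-1]]))
        return walk

    def build_prompt(n_walks: int = 8, steps: int = 10) -> tuple[str, str]:
        """Return (prompt, last_node): several walks as context, then a
        truncated walk whose next token must respect the induced structure."""
        context = "\n".join(" ".join(random_walk(random.choice(LABELS), steps))
                            for _ in range(n_walks))
        query = random_walk(random.choice(LABELS), steps)
        prompt = context + "\n" + " ".join(query[:-1]) + " "
        return prompt, query[-2]  # last visible node; its neighbors are valid

    def score(generate_fn, n_trials: int = 20) -> float:
        """Fraction of trials where the predicted next token is a valid
        neighbor. generate_fn(prompt) -> str wraps the LLM under test."""
        hits = 0
        for _ in range(n_trials):
            prompt, last_node = build_prompt()
            tokens = generate_fn(prompt).strip().split()
            prediction = tokens[0] if tokens else ""
            hits += prediction in EDGES[last_node]
        return hits / n_trials

Any completion-style API can be wrapped as generate_fn; a model that has induced the ring structure should score well above the roughly one-in-three rate of guessing a random label (two of the six labels are valid continuations at any step).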

Entities

Institutions

  • arXiv
