ARTFEED — Contemporary Art Intelligence

Workspace Optimization: Training Agents Without Weight Updates

ai-technology · 2026-05-12

A new paper on arXiv proposes workspace optimization, a method for training language model agents without modifying their weights. Instead of updating parameters, the approach evolves the agent's external workspace, the structured substrate it reads, writes, and tests, through interaction with the environment. The authors introduce DreamTeam, a multi-agent harness for ARC-AGI-3 whose specialized roles build world models, plan, hypothesize, probe, strategize, and route failures. The method mirrors weight-space training: artifacts act as parameters, evidence as data, counterexamples as losses, and textual feedback as gradients. The paper targets hard multi-turn environments where frontier models have strong priors but cannot solve tasks in a single shot.
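The training analogy can be made concrete with a toy sketch. All names here are hypothetical and not from the paper: a frozen edit policy revises an external artifact whenever a counterexample produces textual feedback, so only the workspace changes, never any weights.

```python
# Toy sketch of workspace optimization (invented names, not the paper's code).
# Artifacts play the role of parameters, evidence of data, counterexamples of
# losses, and textual feedback of gradients. The hidden task is y = 3x + 2.

def target_task(x):
    """Environment the agent interacts with (hidden rule)."""
    return 3 * x + 2

def apply_workspace(workspace, x):
    """Run the agent's current artifact (a candidate rule) on an input."""
    return workspace["slope"] * x + workspace["intercept"]

def textual_feedback(x, predicted, actual):
    """Counterexample -> textual 'gradient' describing the failure."""
    if predicted < actual:
        return f"on input {x} the prediction {predicted} is too low"
    return f"on input {x} the prediction {predicted} is too high"

def revise_artifact(workspace, feedback):
    """Frozen edit policy: nudge the artifact in the direction the
    feedback indicates, without touching any model weights."""
    if "too low" in feedback:
        workspace["slope"] += 1
    else:
        workspace["slope"] -= 1
    return workspace

workspace = {"slope": 0, "intercept": 2}   # initial artifact
evidence = [1, 2, 3, 4]                    # inputs gathered by interaction

for _ in range(10):                        # optimization over the workspace
    counterexamples = [
        x for x in evidence
        if apply_workspace(workspace, x) != target_task(x)
    ]
    if not counterexamples:
        break                              # all evidence explained
    x = counterexamples[0]
    fb = textual_feedback(x, apply_workspace(workspace, x), target_task(x))
    workspace = revise_artifact(workspace, fb)

print(workspace)  # converges to {'slope': 3, 'intercept': 2}
```

The point of the sketch is that the "learner" functions are fixed; everything that improves between iterations lives in the workspace dictionary.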

Key facts

  • arXiv paper 2605.09650 proposes workspace optimization
  • Workspace optimization trains agents without weight updates
  • Agents evolve external workspace through interaction
  • DreamTeam is a multi-agent harness for ARC-AGI-3
  • Roles include building world models, planning, hypothesizing, probing, strategizing, and routing failures
  • Method mirrors weight-space training: artifacts as parameters, evidence as data, counterexamples as losses, textual feedback as gradients
  • Targets hard multi-turn environments where models have strong priors but fail in a single shot
  • Tested on current 25-game ARC-AGI-3 public set
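The role structure above can be pictured as a small dispatcher. This is a speculative sketch under invented names (the summary does not describe DreamTeam's actual interfaces): each role reads and writes a shared workspace, and a router sends each failure to the role best placed to repair it.

```python
# Hypothetical sketch of a DreamTeam-style role loop (all names invented).
# Specialist roles mutate a shared workspace; a router maps failure types
# to the role that should handle them.

def world_modeler(ws):
    ws["world_model"] = "grid responds to arrow-key actions"
    return ws

def planner(ws):
    ws["plan"] = ["observe", "move", "check goal"]
    return ws

def hypothesizer(ws):
    ws["hypothesis"] = "reaching the flag ends the level"
    return ws

ROLES = {
    "unknown_dynamics": world_modeler,   # failure type -> repairing role
    "no_plan": planner,
    "unexplained_event": hypothesizer,
}

def route_failure(ws, failure_type):
    """Router role: dispatch a failure to the matching specialist."""
    repair = ROLES.get(failure_type)
    if repair is None:
        raise ValueError(f"no role handles failure {failure_type!r}")
    return repair(ws)

ws = {}
for failure in ["unknown_dynamics", "no_plan", "unexplained_event"]:
    ws = route_failure(ws, failure)
print(sorted(ws))  # ['hypothesis', 'plan', 'world_model']
```

The design choice the sketch illustrates is that coordination happens through the workspace itself rather than through direct role-to-role messages.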

Entities

Institutions

  • arXiv

Sources