ARTFEED — Contemporary Art Intelligence

ADEMA: New Architecture for Long-Horizon LLM Knowledge Synthesis

ai-technology · 2026-04-30

ADEMA is a knowledge-state orchestration architecture designed to mitigate failures in long-horizon LLM tasks, where knowledge states drift, commitments often go unstated, and interruptions break evidence chains. The architecture combines explicit epistemic bookkeeping, dual-evaluator governance with heterogeneous evaluation approaches, adaptive task-mode switching, reputation-shaped resource allocation, checkpoint-resumable persistence, segment-level memory condensation, artifact-first assembly, and final-validity checking with safe fallback. Evidence comes from several sources: a four-scenario showcase package, a consistent 60-run mechanism matrix, targeted micro-ablation and artifact-chain supplements, and a repaired protocol-level benchmark with code-oriented evaluation. The paper is on arXiv under ID 2604.25849.
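The summary does not specify how checkpoint-resumable persistence is implemented; the following is a minimal sketch of one plausible reading, assuming a JSON-serialized knowledge state with atomic checkpoint writes. All names here (`KnowledgeState`, the claim schema, the checkpoint layout) are illustrative, not from the paper.

```python
import json
import os
import tempfile

# Hypothetical sketch: a knowledge state that can be checkpointed and
# resumed, so an interrupted run does not lose its evidence chain.
# The schema and class names are illustrative, not from ADEMA.

class KnowledgeState:
    def __init__(self, claims=None, step=0):
        # claims: mapping from claim id to a statement + confidence record
        self.claims = claims or {}
        self.step = step

    def assert_claim(self, claim_id, statement, confidence):
        self.claims[claim_id] = {"statement": statement,
                                 "confidence": confidence}
        self.step += 1

    def checkpoint(self, path):
        # Write atomically: dump to a temp file, then rename over the
        # target, so an interruption mid-write cannot corrupt the last
        # good checkpoint.
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
        with os.fdopen(fd, "w") as f:
            json.dump({"claims": self.claims, "step": self.step}, f)
        os.replace(tmp, path)

    @classmethod
    def resume(cls, path):
        # Resume from the last checkpoint if one exists, else start fresh.
        if not os.path.exists(path):
            return cls()
        with open(path) as f:
            data = json.load(f)
        return cls(claims=data["claims"], step=data["step"])
```

The atomic write-then-rename is the key design choice: it guarantees that a resumed run always sees either the previous complete state or the new one, never a partial file.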

Key facts

  • ADEMA is a knowledge-state orchestration architecture for long-horizon knowledge synthesis.
  • It addresses failures from knowledge state drift, implicit commitments, and interrupted evidence chains.
  • Components include epistemic bookkeeping, dual-evaluator governance, adaptive task-mode switching, and reputation-shaped resource allocation.
  • Features checkpoint-resumable persistence, segment-level memory condensation, and artifact-first assembly.
  • Includes final-validity checking with safe fallback.
  • Evidence comes from a four-scenario showcase, a 60-run mechanism matrix, micro-ablation and artifact-chain supplements, and a repaired protocol-level benchmark with code-oriented evaluation.
  • The paper is on arXiv under ID 2604.25849.
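The facts above name dual-evaluator governance and final-validity checking with safe fallback without describing their mechanics. A minimal sketch of one plausible reading: a candidate output is accepted only if every evaluator passes it, otherwise a safe fallback is returned. The evaluator functions and the fallback string here are hypothetical, not from the paper.

```python
# Hypothetical sketch of dual-evaluator governance with safe fallback.
# Both evaluators and the fallback value are illustrative placeholders.

def govern(candidate, evaluators, fallback):
    """Return `candidate` only if every evaluator accepts it;
    otherwise return the safe fallback."""
    for evaluate in evaluators:
        if not evaluate(candidate):
            return fallback
    return candidate

# Two deliberately different checks, in the spirit of evaluators with
# varied approaches: one content check, one structural check.
def is_nonempty(answer):
    return bool(answer.strip())

def has_citation(answer):
    return "[source:" in answer

evaluators = [is_nonempty, has_citation]
```

For example, `govern("Claim X [source: doc-3]", evaluators, "INSUFFICIENT EVIDENCE")` passes both checks and returns the candidate, while an uncited claim falls back to the safe value instead of being emitted.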

Entities

Institutions

  • arXiv

Sources