ARTFEED — Contemporary Art Intelligence

OptArgus: Multi-Agent System Detects LLM Optimization Hallucinations

ai-technology · 2026-05-13

Researchers have introduced OptArgus, a multi-agent system for detecting hallucinations in LLM-based optimization modeling. The work tackles a core reliability gap: matching a reference objective value does not guarantee semantic correctness, since a generated model can agree numerically while distorting the underlying optimization semantics. The authors formulate the task as optimization-modeling hallucination detection, emphasizing structural consistency checks across problem descriptions, symbolic models, and solver implementations. They contribute a fine-grained hallucination taxonomy covering objective, variable, constraint, and implementation failures. OptArgus itself combines conductor routing, specialist auditors, and evidence consolidation. To evaluate the system, the authors introduce a benchmark suite of 484 clean artifacts and 1266 fabricated examples. The paper is available on arXiv with identifier 2605.11738.
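To see why objective-value matching is a weak test, consider a hypothetical illustration (not taken from the paper): two structurally different models can reach the same optimum, so a numerical match cannot certify that the model means what the problem description says. A minimal brute-force sketch:

```python
# Hypothetical illustration: value matching misses semantic errors.
from itertools import product

def solve(objective, constraints, grid=range(0, 6)):
    """Brute-force a tiny integer program: maximize objective over the grid."""
    feasible = [p for p in product(grid, repeat=2) if all(c(*p) for c in constraints)]
    return max(objective(*p) for p in feasible)

# Reference model: maximize x + y  subject to  x <= 2, y <= 3
ref = solve(lambda x, y: x + y, [lambda x, y: x <= 2, lambda x, y: y <= 3])

# Hallucinated model: maximize 5x  subject to  x <= 1 -- different semantics
bad = solve(lambda x, y: 5 * x, [lambda x, y: x <= 1])

print(ref, bad)  # both optima equal 5, yet the models are not equivalent
```

Both models attain an optimal value of 5, so a checker that only compares objective values would accept the hallucinated model; this is the failure mode that motivates structural consistency checks.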

Key facts

  • OptArgus is a multi-agent system for detecting hallucinations in LLM-based optimization modeling.
  • Matching a reference objective value is not a reliable test of correctness.
  • The problem is formulated as optimization-modeling hallucination detection.
  • A fine-grained hallucination taxonomy was created for optimization modeling.
  • The taxonomy spans objective, variable, constraint, and implementation failures.
  • OptArgus uses conductor routing, specialist auditors, and evidence consolidation.
  • A benchmark suite with 484 clean artifacts and 1266 fabricated examples was introduced.
  • The research is published on arXiv with identifier 2605.11738.
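The conductor-routing / specialist-auditor / evidence-consolidation architecture summarized above can be sketched as follows. All class, field, and function names here are illustrative assumptions, not the paper's actual API; each auditor stands in for one branch of the taxonomy (objective, variable, constraint, implementation failures).

```python
# Minimal sketch of an OptArgus-style detector; names are assumptions.
from dataclasses import dataclass

@dataclass
class Finding:
    auditor: str   # which specialist raised the flag
    detail: str    # evidence for the consolidation stage

def objective_auditor(artifact):
    # Assumed check: the symbolic objective covers every described quantity.
    if artifact["objective_terms"] != artifact["described_terms"]:
        yield Finding("objective", "objective omits a described term")

def constraint_auditor(artifact):
    # Assumed check: every described restriction appears in the model.
    for name in sorted(artifact["described_constraints"] - artifact["model_constraints"]):
        yield Finding("constraint", f"missing constraint: {name}")

AUDITORS = {"objective": objective_auditor, "constraint": constraint_auditor}

def conductor(artifact):
    """Route the artifact to specialist auditors, then consolidate evidence."""
    findings = []
    for auditor in AUDITORS.values():   # routing: here, simply run every specialist
        findings.extend(auditor(artifact))
    # Consolidation: flag a hallucination only if some auditor produced evidence.
    return {"hallucination": bool(findings), "evidence": findings}

artifact = {
    "objective_terms": {"cost"},
    "described_terms": {"cost"},
    "described_constraints": {"capacity", "demand"},
    "model_constraints": {"capacity"},
}
report = conductor(artifact)
print(report["hallucination"])  # True: the demand constraint is missing
```

The design point this sketch captures is the separation of concerns: routing decides which auditors run, each auditor inspects one failure class from the taxonomy, and consolidation turns their per-class evidence into a single verdict.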

Entities

Institutions

  • arXiv

Sources