ARTFEED — Contemporary Art Intelligence

SAGE Framework Boosts LLM Optimization Modeling Accuracy

ai-technology · 2026-05-06

A group of researchers has unveiled SAGE, a strategy-aware framework that improves how large language models formulate correct and solver-efficient optimization programs. SAGE makes the modeling strategy explicit both during data synthesis and during post-training: it builds a solver-verified multi-strategy dataset, then trains a student model with supervised fine-tuning followed by Segment-Weighted GRPO. A composite reward scores format compliance, correctness, and solver efficiency. Across eight benchmarks spanning synthetic and real-world settings, SAGE lifts average pass@1 from 72.7 to 80.3, surpassing the leading open-source model. It also recovers a broader set of correct formulations, improving component-level diversity at pass@16 by 19-29%, and at the largest problem scale it produces constraint systems with 14.2% fewer constraints.
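
The paper's implementation is not reproduced here, so the Python sketch below is illustrative only: a composite reward combining format compliance, solver-verified correctness, and a solver-efficiency term, followed by the group-relative advantage computation that GRPO-style training uses. The weight values, the `solve_time` normalization, and the `segment_weighted_advantages` helper are assumptions for illustration, not the authors' code.

```python
import statistics

def composite_reward(sample, w_fmt=0.1, w_corr=0.7, w_eff=0.2):
    """Illustrative composite reward; the weights are assumptions, not the paper's.

    sample: dict with keys
      'well_formed' - bool, the program parses and follows the output schema
      'correct'     - bool, solver-verified objective matches the reference
      'solve_time'  - float, seconds the solver took
      'time_budget' - float, per-instance solver time limit
    """
    r_fmt = 1.0 if sample["well_formed"] else 0.0
    r_corr = 1.0 if sample["correct"] else 0.0
    # Efficiency only counts for correct programs; faster solves score higher.
    if sample["correct"]:
        r_eff = max(0.0, 1.0 - sample["solve_time"] / sample["time_budget"])
    else:
        r_eff = 0.0
    return w_fmt * r_fmt + w_corr * r_corr + w_eff * r_eff

def group_relative_advantages(rewards):
    """GRPO-style advantage: standardize rewards within the group of
    rollouts sampled for the same prompt."""
    mu = statistics.mean(rewards)
    sigma = statistics.pstdev(rewards) or 1.0  # guard against zero variance
    return [(r - mu) / sigma for r in rewards]

def segment_weighted_advantages(adv, segment_ids, seg_weights):
    """Hypothetical segment weighting: scale a rollout's scalar advantage
    per token according to which output segment the token belongs to
    (e.g., reasoning text vs. model code)."""
    return [adv * seg_weights[s] for s in segment_ids]
```

Gating the efficiency term on correctness reflects the intuition that a fast but wrong program should earn nothing for speed; whether SAGE gates it this way is not stated in the summary.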

Key facts

  • SAGE is a strategy-aware framework for optimization modeling with LLMs.
  • It uses a solver-verified multi-strategy dataset.
  • Training involves supervised fine-tuning and Segment-Weighted GRPO.
  • Composite reward covers format compliance, correctness, and solver efficiency.
  • Tested on eight benchmarks spanning synthetic and real-world settings.
  • Improves average pass@1 from 72.7 to 80.3 over the leading open-source baseline.
  • Increases component-level diversity at pass@16 by 19-29% (see the pass@k sketch after this list).
  • Reduces constraint count by 14.2% at the largest problem scale.
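
For reference, the pass@1 and pass@16 figures above are instances of the standard pass@k metric. The sketch below is the usual unbiased estimator from the code-generation evaluation literature, computed from n samples of which c are verified correct; it is the generic metric definition, not SAGE-specific code.

```python
import math

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: probability that at least one of k
    programs drawn without replacement from n samples is correct,
    given that c of the n samples are correct."""
    if n - c < k:
        return 1.0
    # 1 - C(n-c, k) / C(n, k), computed stably as a running product
    return 1.0 - math.prod(1.0 - k / i for i in range(n - c + 1, n + 1))
```

For example, pass_at_k(16, 4, 1) evaluates to exactly 0.25: four correct samples out of sixteen give a one-in-four chance that a single draw is correct.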

Entities

Institutions

  • arXiv
