ARTFEED — Contemporary Art Intelligence

DIPS: Using LLMs to Generate Pareto Fronts for Bi-Objective Optimization

ai-technology · 2026-05-13

DIPS is a framework that fine-tunes large language models to generate feasible Pareto fronts for constrained bi-objective convex optimization. Unlike traditional approaches such as iterative scalarization or evolutionary search, DIPS takes a textual description of a problem and directly emits an ordered sequence of continuous decision vectors that approximate its Pareto front. To bridge continuous optimization and autoregressive language modeling, DIPS pairs a compact discretization scheme with Numerically Grounded Token Initialization for the new numerical tokens, and trains with a Three-Phase Curriculum Optimization that progressively enforces structural validity, feasibility, and Pareto-front quality. The method is evaluated on five families of optimization problems and supports fast generation without per-instance solver calls.
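
To make the discretization concrete, here is a minimal Python sketch, assuming a fixed uniform binning of normalized decision variables and a sinusoidal embedding initialization; the bin count, value range, and frequency ladder are illustrative assumptions, not details taken from the paper.

    import numpy as np

    N_BINS = 1000        # assumed size of the numeric token vocabulary
    LO, HI = -1.0, 1.0   # assumed normalized range of decision variables

    def encode(x):
        """Map a continuous decision vector to numeric token ids (bin indices)."""
        x = np.clip(np.asarray(x, dtype=float), LO, HI)
        return np.round((x - LO) / (HI - LO) * (N_BINS - 1)).astype(int).tolist()

    def decode(token_ids):
        """Invert the discretization back to approximate continuous values."""
        bins = np.asarray(token_ids, dtype=float)
        return LO + bins / (N_BINS - 1) * (HI - LO)

    def grounded_init(token_ids, dim=64):
        """Initialize embeddings of new numeric tokens from sinusoidal features
        of their underlying values, so numerically close tokens start with
        similar embeddings (one plausible reading of 'numerically grounded')."""
        v = decode(np.asarray(token_ids))[:, None]       # (n, 1) values
        freqs = np.exp(np.linspace(0.0, 4.0, dim // 2))  # assumed frequency ladder
        return np.concatenate([np.sin(v * freqs), np.cos(v * freqs)], axis=1)

    ids = encode([0.25, -0.7, 0.0])
    print(ids, decode(ids))          # round-trip error is at most half a bin width
    print(grounded_init(ids).shape)  # (3, 64): one embedding row per token

A sequence of such token ids is something an autoregressive model can emit one step at a time, which is the bridge between continuous decision vectors and language-model decoding.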

Key facts

  • DIPS is an end-to-end framework for constrained bi-objective convex optimization.
  • It fine-tunes large language models as amortized Pareto-front generators.
  • Input is a textual problem description; output is an ordered set of feasible decision vectors.
  • Uses compact discretization, Numerically Grounded Token Initialization, and Three-Phase Curriculum Optimization.
  • Evaluated across five families of optimization problems.
  • Eliminates the need for repeated per-instance solver calls (contrast with the scalarization sweep sketched after this list).
  • Published on arXiv with ID 2605.12106.
  • The approach combines LLMs with continuous optimization.
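
As a point of contrast with the amortized generation above, a conventional baseline re-solves the problem once per trade-off weight. The sketch below does this with a scalarization sweep on a toy constrained bi-objective convex instance; the problem data are assumed for illustration and are not from the paper.

    import numpy as np
    from scipy.optimize import minimize

    # Toy instance: minimize f1(x) = ||x - a||^2 and f2(x) = ||x - b||^2
    # subject to the box constraint -1 <= x_i <= 1.
    a, b = np.array([0.8, 0.0]), np.array([-0.5, 0.6])
    f1 = lambda x: float(np.sum((x - a) ** 2))
    f2 = lambda x: float(np.sum((x - b) ** 2))

    pareto = []
    for w in np.linspace(0.0, 1.0, 21):   # one solver call per weight
        res = minimize(lambda x, w=w: w * f1(x) + (1 - w) * f2(x),
                       x0=np.zeros(2), bounds=[(-1, 1)] * 2)
        pareto.append((f1(res.x), f2(res.x)))

    for p1, p2 in pareto[::5]:
        print(f"f1={p1:.3f}  f2={p2:.3f}")

Each point on this front costs a full solver run; DIPS is reported to replace the per-instance sweep with a single generation pass.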

Entities

Institutions

  • arXiv
