ARTFEED — Contemporary Art Intelligence

PDR-ANPG Algorithm Achieves Last-Iterate Convergence in Constrained MDPs

other · 2026-05-04

A novel algorithm, Primal-Dual based Regularized Accelerated Natural Policy Gradient (PDR-ANPG), has been introduced for learning Constrained Markov Decision Processes (CMDPs) with general parameterized policies. The algorithm employs entropy and quadratic regularizers to guarantee convergence at the last iterate, not merely on average over iterates. For a parameterized policy class with transferred compatibility approximation error ε_bias, PDR-ANPG achieves an ε optimality gap and ε constraint violation (up to an additive factor of ε_bias) with a sample complexity of Õ(ε⁻² min{ε⁻², (ε_bias)^(-1/3)}). When the class is incomplete (ε_bias > 0), the bound reduces to Õ(ε⁻²) for ε < (ε_bias)^(1/6), since (ε_bias)^(-1/3) is then a constant factor. For complete policy classes (ε_bias = 0), the minimum is attained by ε⁻², so the algorithm secures a last-iterate ε optimality gap and ε constraint violation with sample complexity Õ(ε⁻⁴). The result strengthens reinforcement learning in safety-critical settings, where constraints must be respected throughout training.
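To make the moving parts concrete, here is a minimal Python sketch of a primal-dual, entropy-regularized natural policy gradient loop on a toy tabular CMDP. It is not the paper's PDR-ANPG: the toy problem and all names (P, r, c, budget, tau, eta, eta_dual) are assumptions, gradients are computed exactly rather than from samples, and the acceleration and quadratic dual regularizer of the actual algorithm are omitted.

    # Minimal sketch (NOT the paper's PDR-ANPG): primal-dual NPG with an
    # entropy regularizer on a small randomly generated tabular CMDP.
    import numpy as np

    rng = np.random.default_rng(0)
    S, A, gamma = 4, 3, 0.9                      # states, actions, discount
    P = rng.dirichlet(np.ones(S), size=(S, A))   # P[s, a] = distribution over next states
    r = rng.random((S, A))                       # reward to maximize
    c = rng.random((S, A))                       # cost whose discounted value is constrained
    budget = 4.5                                 # constraint level (discounted cost is at most 1/(1-gamma) = 10)

    def q_values(pi, f):
        """Exact discounted Q-values of the per-step signal f under policy pi."""
        f_pi = np.einsum("sa,sa->s", pi, f)                  # expected immediate signal
        P_pi = np.einsum("sa,sat->st", pi, P)                # state-to-state kernel under pi
        v = np.linalg.solve(np.eye(S) - gamma * P_pi, f_pi)  # policy evaluation
        return f + gamma * P @ v

    pi = np.full((S, A), 1.0 / A)   # start from the uniform policy
    lam, tau = 0.0, 0.05            # dual variable, entropy temperature
    eta, eta_dual = 0.5, 0.05       # primal and dual step sizes

    for _ in range(500):
        # Soft Lagrangian signal: reward minus lam * cost, with the entropy
        # regularizer folded in through the -tau * log(pi) term.
        q_lag = q_values(pi, r - lam * c - tau * np.log(pi))
        # For tabular softmax policies, the NPG step is a multiplicative-weights
        # update: pi <- pi^(1 - eta*tau) * exp(eta * Q), normalized per state.
        pi = pi * np.exp(eta * (q_lag - q_lag.max(axis=1, keepdims=True)))
        pi /= pi.sum(axis=1, keepdims=True)
        # Projected dual ascent on the constraint violation (uniform start states).
        v_c = np.einsum("sa,sa->s", pi, q_values(pi, c)).mean()
        lam = max(0.0, lam + eta_dual * (v_c - budget))

    print(f"final dual variable lam = {lam:.3f}, discounted cost = {v_c:.3f} (budget {budget})")

The multiplicative form of the policy update is the tabular special case of natural policy gradient with softmax parameterization; the dual step shown here is plain projected gradient ascent on the Lagrange multiplier, where the paper instead regularizes the dual variable quadratically to obtain its last-iterate guarantees.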

Key facts

  • PDR-ANPG algorithm uses entropy and quadratic regularizers.
  • Achieves last-iterate convergence for general parameterized policies in CMDPs.
  • Sample complexity: Õ(ε⁻² min{ε⁻², (ε_bias)^(-1/3)}) in the general case.
  • For incomplete policy classes (ε_bias > 0), the bound reduces to Õ(ε⁻²) when ε < (ε_bias)^(1/6), as illustrated in the sketch after this list.
  • For complete policy classes (ε_bias = 0), the bound becomes Õ(ε⁻⁴).
  • The algorithm ensures both optimality gap and constraint violation are within ε.
  • The transferred compatibility approximation error ε_bias quantifies how incomplete the parameterized policy class is.
  • The paper is from arXiv:2408.11513v2.
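
The regime switch inside the min{·, ·} can be checked numerically. The following sketch uses made-up values of ε and ε_bias (they are not from the paper) to show that the bound behaves as Õ(ε⁻⁴) above the threshold ε = (ε_bias)^(1/6) and as Õ(ε⁻²) times a constant below it.

    # Made-up numbers illustrating Õ(ε⁻² · min{ε⁻², (ε_bias)^(-1/3)}):
    # the min switches at ε = (ε_bias)^(1/6), so for a fixed incomplete
    # class (ε_bias > 0) the bound scales as Õ(ε⁻²) once ε is small enough.
    eps_bias = 1e-6                     # hypothetical approximation error
    threshold = eps_bias ** (1 / 6)     # regime boundary: 0.1 here

    for eps in (0.5, 0.2, 0.05, 0.01):
        bound = eps ** -2 * min(eps ** -2, eps_bias ** (-1 / 3))
        regime = "eps^-4" if eps >= threshold else "eps^-2 * const"
        print(f"eps={eps:<5}  below threshold: {eps < threshold!s:<5}  "
              f"bound ~ {bound:.3g}  ({regime})")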

Entities

Institutions

  • arXiv

Sources

  • arXiv:2408.11513v2 (https://arxiv.org/abs/2408.11513)