ARTFEED — Contemporary Art Intelligence

Safe Reinforcement Learning via Control Barrier Functions with Model Uncertainty

other · 2026-04-29

This framework for safe reinforcement learning uses control-theoretic action perturbations to maintain safety in complex systems with uncertain dynamics. It first learns a probabilistic control-affine dynamics model offline, then constructs control barrier functions (CBFs) that incorporate model uncertainty to yield conservative safety constraints. These constraints are enforced by an online action-correction mechanism, enabling safe exploration without unduly restricting task performance. The approach addresses two shortcomings of prior work: safe RL methods that guarantee safety only in expectation, and control-theoretic methods that assume known dynamics or accurate model estimates. Empirical evaluations demonstrate the framework's effectiveness.
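As an illustration of the kind of constraint involved (standard CBF notation, not taken from the article itself): with a barrier function h whose zero superlevel set encodes safety, the learned mean dynamics are used in a safety condition tightened by a high-confidence model-error bound:

```latex
% Control-affine dynamics; \hat f, \hat g are learned means
\dot{x} = f(x) + g(x)\,u
% Safe set: \{x : h(x) \ge 0\}
% Uncertainty-aware CBF condition, tightened by a high-confidence
% model-error bound \epsilon(x) so it holds for the true dynamics:
\nabla h(x)^{\top}\!\left( \hat f(x) + \hat g(x)\,u \right) + \alpha\!\left( h(x) \right) \;\ge\; \epsilon(x)
```

Any action u satisfying the tightened inequality keeps the system inside the safe set despite model error, which is what makes the constraint conservative.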

Key facts

  • Proposes a safe RL framework using control-theoretic action perturbations
  • Learns a probabilistic control-affine dynamics model offline
  • Constructs control barrier functions (CBFs) incorporating model uncertainty
  • Enforces CBF constraints via online action correction
  • Enables safe exploration without overly restricting task performance
  • Addresses limitations of expectation-based safe RL methods
  • Does not require known dynamics or accurate model estimation
  • Empirical evaluations show effectiveness
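The online action-correction step above can be sketched as a minimal safety filter: leave the RL policy's action untouched when it already satisfies the conservative CBF constraint, and otherwise apply the smallest perturbation that restores it. This is an illustrative sketch under assumed notation (the names `safe_action` and `sigma_bound` are not from the article); with a single affine constraint the minimal-norm correction has a closed form:

```python
import numpy as np

def safe_action(u_rl, grad_h, f_mu, g_mu, h_x, alpha=1.0, sigma_bound=0.0):
    """Project an RL action onto a conservative CBF constraint.

    Illustrative sketch, not the paper's exact algorithm. Assumes learned
    control-affine mean dynamics x_dot ~ f_mu + g_mu @ u, and a
    high-confidence model-error bound `sigma_bound` that tightens the
    constraint. The CBF condition grad_h.(f + g u) >= -alpha*h becomes
    a.u >= b with a = grad_h @ g_mu and
    b = -alpha*h_x - grad_h @ f_mu + sigma_bound.
    """
    a = grad_h @ g_mu                       # constraint normal in action space
    b = -alpha * h_x - grad_h @ f_mu + sigma_bound
    if a @ u_rl >= b:                       # action already safe: no perturbation
        return u_rl
    # Minimal-norm correction: closed-form projection onto the hyperplane a.u = b
    return u_rl + a * (b - a @ u_rl) / (a @ a)
```

With several constraints or action bounds, this projection would typically be replaced by a small quadratic program minimizing the distance to the RL action subject to all tightened CBF constraints.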
