

Poster

Constraint-Conditioned Actor-Critic for Offline Safe Reinforcement Learning

Zijian Guo · Weichao Zhou · Shengao Wang · Wenchao Li

Hall 3 + Hall 2B #381
Thu 24 Apr 7 p.m. PDT — 9:30 p.m. PDT

Abstract: Offline safe reinforcement learning (OSRL) aims to learn policies that achieve high rewards while satisfying safety constraints, using only data collected offline. However, the learned policies often struggle to handle states and actions that are absent from or out-of-distribution (OOD) relative to the offline dataset, which can result in violations of the safety constraints or overly conservative behavior during online deployment. Moreover, many existing methods cannot learn policies that adapt to varying constraint thresholds. To address these challenges, we propose constraint-conditioned actor-critic (CCAC), a novel OSRL method that models the relationship between state-action distributions and safety constraints, and leverages this relationship to regularize critic and policy learning. CCAC learns policies that can effectively handle OOD data and adapt to varying constraint thresholds. Empirical evaluations on the DSRL benchmarks show that CCAC significantly outperforms existing methods for learning adaptive, safe, and high-reward policies.
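To illustrate the constraint-conditioning idea mentioned in the abstract, the PyTorch sketch below shows one common way to realize it: the actor and critic each receive the cost threshold as an extra input, so a single set of networks can be queried with different safety budgets at deployment time. This is a minimal illustrative sketch under assumed class names and dimensions, not the authors' CCAC implementation, which additionally regularizes the critics and policy using the modeled relationship between state-action distributions and constraints.

```python
# Illustrative sketch only (not the CCAC implementation): condition the policy and
# cost critic on a scalar constraint threshold kappa so one model adapts to many budgets.
import torch
import torch.nn as nn

class ConstraintConditionedPolicy(nn.Module):
    def __init__(self, state_dim: int, action_dim: int, hidden: int = 256):
        super().__init__()
        # +1 input dimension for the scalar cost threshold kappa
        self.net = nn.Sequential(
            nn.Linear(state_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim), nn.Tanh(),  # actions bounded in [-1, 1]
        )

    def forward(self, state: torch.Tensor, kappa: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([state, kappa], dim=-1))

class ConstraintConditionedCritic(nn.Module):
    """Q-function over (state, action, kappa); one head each for reward and cost."""
    def __init__(self, state_dim: int, action_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state, action, kappa):
        return self.net(torch.cat([state, action, kappa], dim=-1))

# Usage: sample a batch of thresholds during training so the policy generalizes
# across safety budgets instead of being tied to a single one (dimensions are made up).
policy = ConstraintConditionedPolicy(state_dim=17, action_dim=6)
cost_critic = ConstraintConditionedCritic(state_dim=17, action_dim=6)
state = torch.randn(32, 17)
kappa = torch.rand(32, 1) * 40.0          # e.g. cost limits sampled from [0, 40]
action = policy(state, kappa)
q_cost = cost_critic(state, action, kappa)
```

At test time, the same trained networks can be evaluated with any desired threshold simply by changing the kappa input, which is what allows adaptation to varying constraint thresholds without retraining.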
