

In-Person Poster Presentation / Poster Accept

Guiding Safe Exploration with Weakest Preconditions

Greg Anderson · Swarat Chaudhuri · Isil Dillig

MH1-2-3-4 #148

Keywords: [ safe exploration ] [ safe learning ] [ reinforcement learning ]


Abstract:

In reinforcement learning for safety-critical settings, it is often desirable for the agent to obey safety constraints at all points in time, including during training. We present a novel neurosymbolic approach called SPICE to solve this safe exploration problem. SPICE uses an online shielding layer based on symbolic weakest preconditions to achieve a more precise safety analysis than existing tools without unduly impacting the training process. We evaluate the approach on a suite of continuous control benchmarks and show that it can achieve comparable performance to existing safe learning techniques while incurring fewer safety violations. Additionally, we present theoretical results showing that SPICE converges to the optimal safe policy under reasonable assumptions.
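To make the shielding idea concrete, here is a minimal, hypothetical sketch of how a weakest-precondition check could filter a policy's actions online. It assumes a local linear environment model and a half-space unsafe region; the names (`weakest_precondition`, `Shield`, `safe_action`) are illustrative and do not reflect the actual SPICE implementation.

```python
import numpy as np

def weakest_precondition(A, B, c, unsafe_halfspace, action):
    """For a local linear model s' = A s + B a + c and an unsafe region
    {s' : w . s' > b}, return the predicate (w_pre, b_pre) on the current
    state s that guarantees the next state is safe under this action."""
    w, b = unsafe_halfspace
    # Substitute the model into the postcondition w . s' <= b:
    #   w . (A s + B a + c) <= b   =>   (A^T w) . s <= b - w . (B a + c)
    return A.T @ w, b - w @ (B @ action + c)

class Shield:
    """Wraps a learned policy: actions whose weakest precondition is
    violated by the current state are replaced with a fallback safe action."""
    def __init__(self, A, B, c, unsafe_halfspace, safe_action):
        self.A, self.B, self.c = A, B, c
        self.unsafe = unsafe_halfspace
        self.safe_action = safe_action

    def filter(self, state, proposed_action):
        w_pre, b_pre = weakest_precondition(
            self.A, self.B, self.c, self.unsafe, proposed_action)
        if w_pre @ state <= b_pre:   # precondition holds: action is safe
            return proposed_action
        return self.safe_action      # otherwise fall back to the safe action
```

In use, the agent would pass each proposed action through `Shield.filter` before execution, so exploration proceeds normally whenever the symbolic precondition certifies safety and is intervened upon only otherwise.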
