

Poster

Robust Transfer of Safety-Constrained Reinforcement Learning Agents

Markel Zubia · Thiago Simão · Nils Jansen

Hall 3 + Hall 2B #633
Fri 25 Apr midnight PDT — 2:30 a.m. PDT

Abstract:

Reinforcement learning (RL) often relies on trial and error, which may cause undesirable outcomes. As a result, standard RL is inappropriate for safety-critical applications. To address this issue, one may train a safe agent in a controlled environment (where safety violations are allowed) and then transfer it to the real world (where safety violations may have disastrous consequences). Prior work has made this transfer safe as long as the new environment preserves the safety-related dynamics. However, in most practical applications, differences or shifts in dynamics between the two environments are inevitable, potentially leading to safety violations after the transfer. This work aims to guarantee safety even when the new environment has different (safety-related) dynamics. In other words, we aim to make the process of safe transfer robust. Our methodology (1) robustifies an agent in the controlled environment and (2) provably provides---under mild assumptions---a safe transfer to new environments. The empirical evaluation shows that this method yields policies that are robust against changes in dynamics, demonstrating safety after transfer to a new environment.
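The abstract describes robustifying an agent in a controlled environment before transfer. A minimal sketch of that general idea (not the authors' algorithm) is to randomize the safety-related dynamics during training, so the learned policy does not overfit to one transition model. Everything below is illustrative: the chain environment, the `slip` parameter standing in for a dynamics shift, and the penalized Q-learning objective are all assumptions, not details from the paper.

```python
import random

# Illustrative chain MDP: states 0..4, start at 1, goal at 4,
# entering state 0 incurs a safety cost. The `slip` probability is
# the safety-related dynamics parameter that may shift after transfer.
N_STATES = 5
ACTIONS = [0, 1]      # 0 = move left, 1 = move right
HAZARD, START, GOAL = 0, 1, N_STATES - 1

def step(state, action, slip):
    """One transition; with probability `slip` the move is flipped."""
    move = 1 if action == 1 else -1
    if random.random() < slip:
        move = -move
    nxt = max(0, min(N_STATES - 1, state + move))
    reward = 1.0 if nxt == GOAL else 0.0
    cost = 1.0 if nxt == HAZARD else 0.0
    return nxt, reward, cost

def train_robust(slip_range=(0.0, 0.3), episodes=2000,
                 alpha=0.1, gamma=0.95, eps=0.1, cost_weight=2.0):
    """Q-learning on reward minus a weighted safety cost, with the slip
    probability resampled each episode from a family of training
    environments (a simple domain-randomization stand-in for
    robustification in the controlled environment)."""
    q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        slip = random.uniform(*slip_range)   # randomize safety dynamics
        s = START
        for _ in range(30):
            if random.random() < eps:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[s][act])
            nxt, r, c = step(s, a, slip)
            target = (r - cost_weight * c) + gamma * max(q[nxt])
            q[s][a] += alpha * (target - q[s][a])
            s = nxt
            if s == GOAL:
                break
    return q

def evaluate(q, slip, episodes=200):
    """Average safety cost per episode of the greedy policy under a
    (possibly shifted) slip probability."""
    total = 0.0
    for _ in range(episodes):
        s = START
        for _ in range(30):
            a = max(ACTIONS, key=lambda act: q[s][act])
            s, _, c = step(s, a, slip)
            total += c
            if s == GOAL:
                break
    return total / episodes
```

Evaluating the trained policy at several `slip` values inside and outside the training range gives a crude empirical picture of how safety degrades under dynamics shift; the paper's contribution is a provable guarantee, which this sketch does not provide.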
