

Poster

Keep Doing What Worked: Behavior Modelling Priors for Offline Reinforcement Learning

Abbas Abdolmaleki · Felix Berkenkamp · Nicolas Heess · Martin Riedmiller · Roland Hafner · Jost Tobias Springenberg · Thomas Lampe · Noah Y Siegel · Michael Neunert


Abstract:

Off-policy reinforcement learning algorithms promise to be applicable in settings where only a fixed dataset (batch) of environment interactions is available and no new experience can be acquired. This property makes these algorithms appealing for real-world problems such as robot control. In practice, however, standard off-policy algorithms fail in the batch setting for continuous control. In this paper, we propose a simple solution to this problem. It admits the use of data generated by arbitrary behavior policies and uses a learned prior -- the advantage-weighted behavior model (ABM) -- to bias the RL policy towards actions that have previously been executed and are likely to be successful on the new task. Our method can be seen as an extension of recent work on batch RL that enables stable learning from conflicting data sources. We find improvements over competitive baselines in a variety of RL tasks -- including standard continuous control benchmarks and multi-task learning for simulated and real-world robots.
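To make the abstract's core idea concrete, below is a minimal sketch (not the paper's implementation) of how an advantage-weighted behavior-model prior might be trained from a fixed batch. The network architecture, the weighting function f(A) = 1[A >= 0], and the availability of per-transition advantage estimates are all assumptions made here for illustration; the paper's exact choices may differ.

    # Hedged sketch: advantage-weighted behavior cloning to learn an ABM-style
    # prior from a fixed batch of (observation, action, advantage) tuples.
    # All design choices below (architecture, f(A) = 1[A >= 0]) are assumptions.
    import torch
    import torch.nn as nn

    class GaussianPolicy(nn.Module):
        """Simple diagonal-Gaussian policy used as the behavior-model prior."""
        def __init__(self, obs_dim, act_dim, hidden=256):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(obs_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
            )
            self.mean = nn.Linear(hidden, act_dim)
            self.log_std = nn.Parameter(torch.zeros(act_dim))

        def log_prob(self, obs, act):
            h = self.net(obs)
            dist = torch.distributions.Normal(self.mean(h), self.log_std.exp())
            return dist.log_prob(act).sum(-1)

    def abm_prior_loss(prior, obs, act, advantage):
        """Behavior-modelling loss weighted by a positive function of the
        advantage, so the prior imitates only actions that appeared to help."""
        weight = (advantage >= 0).float()   # assumed weighting: f(A) = 1[A >= 0]
        return -(weight * prior.log_prob(obs, act)).mean()

In this sketch, the learned prior would then be used to regularize the RL policy (for example via a KL constraint towards the prior during policy improvement), biasing it towards previously executed actions that appeared successful.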
