

Poster

Trust-PCL: An Off-Policy Trust Region Method for Continuous Control

Ofir Nachum · Mohammad Norouzi · Kelvin Xu · Dale Schuurmans

East Meeting level; 1,2,3 #18

Abstract:

Trust region methods, such as TRPO, are often used to stabilize policy optimization algorithms in reinforcement learning (RL). While current trust region strategies are effective for continuous control, they typically require a large amount of on-policy interaction with the environment. To address this problem, we propose an off-policy trust region method, Trust-PCL, which exploits the observation that the optimal policy and state values of a maximum reward objective with a relative-entropy regularizer satisfy a set of multi-step pathwise consistencies along any path. The introduction of relative entropy regularization allows Trust-PCL to maintain optimization stability while exploiting off-policy data to improve sample efficiency. When evaluated on a number of continuous control tasks, Trust-PCL significantly improves the solution quality and sample efficiency of TRPO.
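To make the "multi-step pathwise consistency" concrete, below is a minimal sketch of what such a consistency residual could look like for a sampled sub-trajectory. It assumes an entropy coefficient tau and a relative-entropy coefficient lam toward a prior policy, and the specific per-step regularized reward shown in the comments is an assumption based on the abstract's description, not a verbatim reproduction of the paper's equations. An off-policy learner in this spirit would drive the squared residual toward zero on both fresh and replayed trajectory segments.

```python
import numpy as np

def path_consistency_residual(rewards, log_pi, log_pi_prior, v_start, v_end,
                              gamma=0.99, tau=0.01, lam=0.01):
    """Multi-step path-consistency residual for a length-d sub-trajectory.

    Hypothetical form, assumed from the abstract: under a maximum reward
    objective with entropy (tau) and relative-entropy (lam) regularization,
    the optimal policy and values should make this residual zero along any
    path, on-policy or off-policy.

    rewards, log_pi, log_pi_prior: length-d arrays for steps t .. t+d-1
    v_start, v_end: value estimates V(s_t) and V(s_{t+d})
    """
    d = len(rewards)
    discounts = gamma ** np.arange(d)
    # Per-step regularized reward: subtract the entropy term and add the
    # relative-entropy (prior log-probability) term. The exact coefficients
    # here are illustrative assumptions.
    reg_rewards = rewards - (tau + lam) * log_pi + lam * log_pi_prior
    return -v_start + gamma ** d * v_end + np.sum(discounts * reg_rewards)

# Example usage: a training step would minimize 0.5 * residual**2 with
# respect to the policy and value parameters that produced log_pi, v_start,
# and v_end, over segments drawn from a replay buffer.
```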
