

Poster

Maximum a Posteriori Policy Optimisation

Abbas Abdolmaleki · Jost Tobias Springenberg · Nicolas Heess · Yuval Tassa · Remi Munos

East Meeting Level 1,2,3 #8

Abstract:

We introduce a new algorithm for reinforcement learning called Maximum a Posteriori Policy Optimisation (MPO), based on coordinate ascent on a relative-entropy objective. We show that several existing methods can be related directly to our derivation. We develop two off-policy algorithms and demonstrate that they are competitive with the state of the art in deep reinforcement learning. In particular, for continuous control our method outperforms existing methods in terms of sample efficiency, robustness to hyperparameter settings, and resistance to premature convergence.
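The abstract does not spell out the algorithm itself; as an illustration of the coordinate-ascent structure it refers to, the following Python sketch shows a non-parametric E-step of the kind used in MPO, where actions sampled from the current policy are re-weighted by a softmax over their Q-values at a temperature obtained from a convex dual under a relative-entropy (KL) bound. Names such as estep_weights and kl_epsilon are illustrative and not taken from the paper.

    # Minimal sketch of an MPO-style E-step re-weighting (illustrative, not the authors' code).
    import numpy as np
    from scipy.optimize import minimize

    def estep_weights(q_values, kl_epsilon=0.1):
        """q_values: array of shape [num_states, num_action_samples] of Q estimates
        for actions sampled from the current policy."""
        def dual(eta):
            # Dual g(eta) = eta * eps + eta * mean_s log mean_a exp(Q(s, a) / eta).
            # A real implementation would use a log-sum-exp trick for numerical stability.
            return eta * kl_epsilon + eta * np.mean(
                np.log(np.mean(np.exp(q_values / eta), axis=1))
            )

        # Find the temperature that satisfies the KL (relative-entropy) bound.
        eta = minimize(lambda e: dual(e[0]), x0=[1.0], bounds=[(1e-6, None)]).x[0]

        # Per-state softmax weights; the M-step then fits the parametric policy to them
        # under an additional trust-region constraint.
        w = np.exp(q_values / eta)
        return w / w.sum(axis=1, keepdims=True), eta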
