

Maximum a Posteriori Policy Optimisation

Abbas Abdolmaleki · Jost Springenberg · Nicolas Heess · Yuval Tassa · Remi Munos

2018 Poster

Abstract:

We introduce a new algorithm for reinforcement learning called Maximum a Posteriori Policy Optimisation (MPO), based on coordinate ascent on a relative-entropy objective. We show that several existing methods can be directly related to our derivation. We develop two off-policy algorithms and demonstrate that they are competitive with the state of the art in deep reinforcement learning. In particular, for continuous control, our method outperforms existing methods in sample efficiency, resistance to premature convergence, and robustness to hyperparameter settings.
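
The coordinate-ascent view alternates an E-step, which reweights actions sampled from the current policy toward higher estimated values under a KL bound, with an M-step that refits the parametric policy to those weights. Below is a minimal sketch of such an E-step, not the authors' implementation: it assumes NumPy/SciPy, and the function name mpo_estep_weights, the epsilon bound, and the exact dual form used are illustrative assumptions.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import logsumexp

    def mpo_estep_weights(q_values, epsilon=0.1):
        """Sketch of a non-parametric E-step in the spirit of MPO.

        q_values: array [num_states, num_action_samples] of Q estimates
                  for actions sampled from the current policy.
        epsilon:  KL bound on the policy-improvement step (illustrative).

        Returns per-state weights proportional to exp(Q(s, a) / eta),
        where the temperature eta minimizes the dual
            g(eta) = eta * epsilon
                     + eta * mean_s log mean_a exp(Q(s, a) / eta).
        """
        n_actions = q_values.shape[1]

        def dual(eta_arr):
            eta = eta_arr[0]
            # log mean_a exp(Q / eta), computed stably, averaged over states
            log_mean = logsumexp(q_values / eta, axis=1) - np.log(n_actions)
            return eta * epsilon + eta * log_mean.mean()

        res = minimize(dual, x0=np.array([1.0]), bounds=[(1e-6, None)])
        eta = float(res.x[0])

        # Self-normalized per-state weights over the sampled actions.
        log_w = q_values / eta
        log_w -= logsumexp(log_w, axis=1, keepdims=True)
        return np.exp(log_w), eta

A full algorithm would follow this with a weighted maximum-likelihood M-step that fits the parametric policy to the returned weights, typically under its own KL trust region; that step is omitted here.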
