Poster
Modeling the Long Term Future in Model-Based Reinforcement Learning
Nan Rosemary Ke · Amanpreet Singh · Ahmed Touati · Anirudh Goyal · Yoshua Bengio · Devi Parikh · Dhruv Batra
Great Hall BC #37
Keywords: [ model-based reinforcement learning ] [ variational inference ]
In model-based reinforcement learning, the agent interleaves model learning and planning. These two components are inextricably intertwined: if the model cannot provide sensible long-term predictions, the planner will exploit model flaws, which can yield catastrophic failures. This paper focuses on building a model that reasons about the long-term future and demonstrates how to use it for efficient planning and exploration. To this end, we build a latent-variable autoregressive model by leveraging recent ideas in variational inference. We argue that forcing latent variables to carry future information through an auxiliary task substantially improves long-term predictions. Moreover, by planning in the latent space, the planner's solution is ensured to stay within regions where the model is valid. An exploration strategy can be devised by searching for trajectories that are unlikely under the model. Our method achieves higher reward faster than baselines on a variety of tasks and environments in both the imitation learning and model-based reinforcement learning settings.
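The abstract gives only a high-level description, so the sketch below illustrates one way the core training objective could look: a variational autoregressive dynamics model whose latent variables are pushed, via an auxiliary loss, to predict a backward-RNN summary of the future (a Z-forcing-style auxiliary cost). This is a minimal sketch under stated assumptions, not the paper's exact architecture; all module names, layer sizes, and the MSE-based losses are illustrative.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class LatentDynamicsModel(nn.Module):
        # Illustrative latent-variable autoregressive dynamics model.
        # Each latent z_t is sampled from a posterior conditioned on the
        # forward state h_{t-1} and a backward-RNN state b_{t+1}, and an
        # auxiliary head forces z_t to predict b_{t+1} (future information).
        def __init__(self, obs_dim, act_dim, hid=128, zdim=32):
            super().__init__()
            self.fwd = nn.GRUCell(obs_dim + act_dim + zdim, hid)  # forward RNN h_t
            self.bwd = nn.GRU(obs_dim, hid, batch_first=True)     # backward RNN over the future
            self.prior = nn.Linear(hid, 2 * zdim)                 # p(z_t | h_{t-1})
            self.post = nn.Linear(2 * hid, 2 * zdim)              # q(z_t | h_{t-1}, b_{t+1})
            self.dec = nn.Linear(hid, obs_dim)                    # next-observation decoder
            self.aux = nn.Linear(zdim, hid)                       # auxiliary head: z_t -> b_{t+1}

        def forward(self, obs, act):
            # obs: (B, T+1, obs_dim), act: (B, T, act_dim)
            B, T = act.shape[:2]
            # Backward RNN run over time-reversed observations, then flipped
            # back so b[:, t] summarizes obs[t:].
            b, _ = self.bwd(obs.flip(1))
            b = b.flip(1)
            h = obs.new_zeros(B, self.fwd.hidden_size)
            nll = kl = aux = 0.0
            for t in range(T):
                pm, plv = self.prior(h).chunk(2, dim=-1)          # prior mean / log-variance
                qm, qlv = self.post(torch.cat([h, b[:, t + 1]], -1)).chunk(2, -1)
                z = qm + torch.randn_like(qm) * (0.5 * qlv).exp() # reparameterized sample
                # KL(q || p) between diagonal Gaussians.
                kl = kl + 0.5 * (plv - qlv + (qlv.exp() + (qm - pm) ** 2) / plv.exp() - 1).sum(-1).mean()
                h = self.fwd(torch.cat([obs[:, t], act[:, t], z], -1), h)
                nll = nll + F.mse_loss(self.dec(h), obs[:, t + 1])
                # Auxiliary cost: force z_t to carry future information.
                # Detaching the target is a simplification here, not
                # necessarily the paper's choice.
                aux = aux + F.mse_loss(self.aux(z), b[:, t + 1].detach())
            return nll + kl + aux

    # Usage on random data (B=16, T=10, obs_dim=8, act_dim=2):
    model = LatentDynamicsModel(obs_dim=8, act_dim=2)
    loss = model(torch.randn(16, 11, 8), torch.randn(16, 10, 2))
    loss.backward()

Under this setup, planning in latent space would amount to searching over action sequences while rolling the forward RNN with prior samples of z_t, and the exploration signal mentioned in the abstract would correspond to trajectories with low likelihood (high nll + kl) under the model.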