ICLR 2018

Workshop

Universal Successor Representations for Transfer Reinforcement Learning

Chen Ma · Junfeng Wen · Yoshua Bengio

East Meeting Level 8 + 15 #1

The objective of transfer reinforcement learning is to generalize from a set of previous tasks to unseen new tasks. In this work, we focus on the transfer scenario where tasks share the same dynamics but differ in their goals. Although general value functions (Sutton et al., 2011) have been shown to be useful for knowledge transfer, learning a universal value function can be challenging in practice. To address this, we propose (1) to use universal successor representations (USR) to represent the transferable knowledge and (2) a USR approximator (USRA) that can be trained by interacting with the environment. Our experiments show that USR can be effectively applied to new tasks, and that an agent initialized with the trained USRA reaches the goal considerably faster than a randomly initialized agent.
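To make the idea concrete, here is a minimal tabular sketch (not the paper's implementation, and written in plain Python/NumPy) of a goal-conditioned successor representation: the reward for goal g is assumed to factor as phi(s)^T w(g), the USR psi(s, g) accumulates discounted expected features, and the value is recovered as V(s, g) = psi(s, g)^T w(g). The names phi, psi, w, and td_update, as well as the one-hot state features, are illustrative assumptions rather than details from the paper.

import numpy as np

# Sketch only: tabular universal successor representation for a
# goal-conditioned MDP with state features phi(s).
# psi[s, g] approximates E[ sum_t gamma^t phi(s_t) | s_0 = s, goal g ];
# the value is recovered as V(s, g) = psi[s, g] @ w[g].

n_states, n_goals, d = 25, 4, 25
gamma, alpha = 0.95, 0.1

phi = np.eye(n_states)                    # one-hot state features (illustrative)
psi = np.zeros((n_states, n_goals, d))    # universal successor representation
w = 0.01 * np.random.randn(n_goals, d)    # goal-specific reward weights

def td_update(s, s_next, r, g, done):
    """One TD(0) step on both the USR and the goal's reward weights."""
    target = phi[s] + (0.0 if done else gamma) * psi[s_next, g]
    psi[s, g] += alpha * (target - psi[s, g])        # successor-feature TD error
    w[g] += alpha * (r - phi[s] @ w[g]) * phi[s]     # regress reward onto features

def value(s, g):
    return psi[s, g] @ w[g]                          # V(s, g) = psi^T w

Under this decomposition, adapting to an unseen goal amounts to estimating a new weight vector w(g) (and, if needed, fine-tuning psi) while reusing what has already been learned about the shared dynamics, which is broadly the kind of reuse the initialization experiment described above relies on.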
