

Poster

Large-Scale Study of Curiosity-Driven Learning

Yuri Burda · Harrison Edwards · Deepak Pathak · Amos Storkey · Trevor Darrell · Alexei Efros

Great Hall BC #76

Keywords: [ no-reward ] [ no extrinsic reward ] [ intrinsic reward ] [ curiosity ] [ exploration ] [ unsupervised ] [ skills ]


Abstract: Reinforcement learning algorithms rely on carefully engineered rewards from the environment that are extrinsic to the agent. However, annotating each environment with hand-designed, dense rewards is difficult and not scalable, motivating the need for reward functions that are intrinsic to the agent. Curiosity is one such intrinsic reward function, which uses prediction error as a reward signal. In this paper: (a) We perform the first large-scale study of purely curiosity-driven learning, i.e. {\em without any extrinsic rewards}, across $54$ standard benchmark environments, including the Atari game suite. Our results show surprisingly good performance as well as a high degree of alignment between the intrinsic curiosity objective and the hand-designed extrinsic rewards of many games. (b) We investigate the effect of using different feature spaces for computing prediction error and show that random features are sufficient for many popular RL game benchmarks, but learned features appear to generalize better (e.g. to novel game levels in Super Mario Bros.). (c) We demonstrate limitations of prediction-based rewards in stochastic setups. Game-play videos and code are at https://doubleblindsupplementary.github.io/large-curiosity/.
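To make the abstract's core idea concrete, the sketch below illustrates one way to compute a curiosity reward as the prediction error of a forward dynamics model in a fixed random feature space, one of the feature-space variants the abstract mentions. This is not the authors' released code; the network sizes, optimizer, and helper names (phi, forward_model, intrinsic_reward) are illustrative assumptions.

```python
# Minimal sketch (assumed, not the authors' implementation): intrinsic reward
# as forward-model prediction error in a frozen random feature space.
import torch
import torch.nn as nn
import torch.nn.functional as F

obs_dim, act_dim, feat_dim = 16, 4, 32  # illustrative sizes

# Random, untrained feature encoder phi(s); its weights stay frozen.
phi = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, feat_dim))
for p in phi.parameters():
    p.requires_grad_(False)

# Learned forward dynamics model: f(phi(s), a) -> predicted phi(s').
forward_model = nn.Sequential(
    nn.Linear(feat_dim + act_dim, 64), nn.ReLU(), nn.Linear(64, feat_dim)
)
opt = torch.optim.Adam(forward_model.parameters(), lr=1e-3)

def intrinsic_reward(obs, action_onehot, next_obs):
    """Curiosity reward = error in predicting next-state features; no extrinsic reward."""
    with torch.no_grad():
        feats, next_feats = phi(obs), phi(next_obs)
    pred = forward_model(torch.cat([feats, action_onehot], dim=-1))
    loss = F.mse_loss(pred, next_feats)   # train the forward model on this error
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.detach()                  # the same error serves as the reward signal

# Example usage with dummy transition data.
r_int = intrinsic_reward(
    torch.randn(1, obs_dim),
    F.one_hot(torch.tensor([1]), act_dim).float(),
    torch.randn(1, obs_dim),
)
```

In this sketch the agent would maximize r_int alone; swapping the frozen encoder for one trained jointly (e.g. via an inverse dynamics loss) gives the learned-feature variant the abstract compares against.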
