

Poster

Recurrent Experience Replay in Distributed Reinforcement Learning

Steven Kapturowski · Georg Ostrovski · John Quan · Remi Munos · Will Dabney

Great Hall BC #68

Keywords: [ experience replay ] [ distributed training ] [ rnn ] [ lstm ] [ reinforcement learning ]


Abstract:

Building on the recent successes of distributed training of RL agents, in this paper we investigate the training of RNN-based RL agents from distributed prioritized experience replay. We study the effects of parameter lag resulting in representational drift and recurrent state staleness, and empirically derive an improved training strategy. Using a single network architecture and a fixed set of hyper-parameters, the resulting agent, Recurrent Replay Distributed DQN, quadruples the previous state of the art on Atari-57, and matches the state of the art on DMLab-30. It is the first agent to exceed human-level performance in 52 of the 57 Atari games.
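To make the abstract's central idea concrete, here is a minimal sketch of replaying fixed-length sequences together with the recurrent state recorded at the start of each sequence, so that replayed trajectories can initialize the RNN without re-running it from the episode start. This is an illustrative assumption, not the authors' implementation; the names `SequenceReplayBuffer` and `Sequence` are hypothetical.

```python
# Illustrative sketch only -- not the paper's code. Each replayed sequence
# carries the recurrent state produced when the sequence was generated; as the
# learner's parameters move on, that stored state becomes stale ("parameter
# lag" leading to representational drift and recurrent state staleness).

import random
from collections import namedtuple
from typing import List

# A fixed-length slice of trajectory plus the recurrent state at its start.
Sequence = namedtuple(
    "Sequence",
    ["observations", "actions", "rewards", "initial_recurrent_state", "priority"],
)


class SequenceReplayBuffer:
    """Minimal prioritized replay over fixed-length sequences."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.sequences: List[Sequence] = []

    def add(self, sequence: Sequence) -> None:
        # Overwrite the oldest entry once the buffer is full.
        if len(self.sequences) >= self.capacity:
            self.sequences.pop(0)
        self.sequences.append(sequence)

    def sample(self, batch_size: int) -> List[Sequence]:
        # Proportional prioritized sampling (with replacement).
        weights = [s.priority for s in self.sequences]
        return random.choices(self.sequences, weights=weights, k=batch_size)


if __name__ == "__main__":
    buffer = SequenceReplayBuffer(capacity=1000)
    # Distributed actors would append sequences like this one.
    buffer.add(
        Sequence(
            observations=[0.0] * 80,
            actions=[0] * 80,
            rewards=[0.0] * 80,
            initial_recurrent_state=[0.0] * 512,
            priority=1.0,
        )
    )
    batch = buffer.sample(batch_size=1)
    print(len(batch), "sequence(s) sampled")
```

In this sketch the learner would unroll its RNN from `initial_recurrent_state` over the sampled sequence; how to handle the staleness of that stored state is the training-strategy question the paper studies empirically.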
