

Poster

Prioritized Generative Replay

Ren Wang · Kevin Frans · Pieter Abbeel · Sergey Levine · Alexei Efros

Hall 3 + Hall 2B #546
Wed 23 Apr 7 p.m. PDT — 9:30 p.m. PDT
 
Oral presentation: Oral Session 2D
Thu 24 Apr 12:30 a.m. PDT — 2 a.m. PDT

Abstract:

Sample-efficient online reinforcement learning often uses replay buffers to store experience for reuse when updating the value function. However, uniform replay is inefficient, since certain classes of transitions can be more relevant to learning. While prioritization of more useful samples is helpful, this strategy can also lead to overfitting, as useful samples are likely to be more rare. In this work, we instead propose a prioritized, parametric version of an agent's memory, using generative models to capture online experience. This paradigm enables (1) densification of past experience, with new generations that benefit from the generative model's generalization capacity, and (2) guidance via a family of "relevance functions" that push these generations towards more useful parts of an agent's acquired history. We show this recipe can be instantiated using conditional diffusion models and simple relevance functions such as curiosity- or value-based metrics. Our approach consistently improves performance and sample efficiency in both state- and pixel-based domains. We expose the mechanisms underlying these gains, showing how guidance promotes diversity in our generated transitions and reduces overfitting. We also showcase how our approach can train policies with even higher update-to-data ratios than before, opening up avenues to better scale online RL agents.
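To make the recipe concrete, below is a minimal, hypothetical Python sketch of the outer training loop the abstract describes; it is not the authors' implementation. The conditional diffusion model is replaced by a toy generative memory, relevance() stands in for a curiosity- or value-based relevance function, and guidance is approximated by relevance-weighted resampling with small added noise rather than true guided diffusion sampling. All names are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def relevance(transition):
    # Placeholder relevance function: the paper uses curiosity- or value-based
    # metrics; reward magnitude is used here purely for illustration.
    _, _, reward, _ = transition
    return abs(reward) + 1e-3

class ToyGenerativeMemory:
    # Stand-in for a conditional diffusion model trained on online experience.
    def __init__(self):
        self.buffer = []

    def add(self, transition):
        self.buffer.append(transition)

    def generate(self, n):
        # "Guidance" approximated by relevance-weighted resampling plus small
        # Gaussian noise; the actual method steers diffusion sampling toward
        # more relevant regions of the agent's acquired history.
        weights = np.array([relevance(t) for t in self.buffer])
        probs = weights / weights.sum()
        idx = rng.choice(len(self.buffer), size=n, p=probs)
        synthetic = []
        for i in idx:
            s, a, r, s_next = self.buffer[i]
            synthetic.append((s + 0.01 * rng.normal(size=s.shape), a,
                              r, s_next + 0.01 * rng.normal(size=s_next.shape)))
        return synthetic

# Illustrative loop: collect real transitions, then draw relevance-guided
# synthetic batches for value-function updates (supporting a high
# update-to-data ratio).
memory = ToyGenerativeMemory()
for step in range(100):
    s, a = rng.normal(size=4), int(rng.integers(3))
    r, s_next = float(rng.normal()), rng.normal(size=4)
    memory.add((s, a, r, s_next))
    if step >= 10:
        batch = memory.generate(8)  # synthetic transitions for Q-updates
        # a real agent would run its value-function update on this batch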
