

Spotlight

Understanding and Preventing Capacity Loss in Reinforcement Learning

Clare Lyle · Mark Rowland · Will Dabney

Abstract:

The reinforcement learning (RL) problem is rife with sources of non-stationarity that can destabilize or inhibit learning progress. We identify a key mechanism by which this occurs in agents using neural networks as function approximators: capacity loss, whereby networks trained to predict a sequence of target values lose their ability to quickly fit new functions over time. We demonstrate that capacity loss occurs in a broad range of RL agents and environments, and is particularly damaging to learning progress in sparse-reward tasks. We then present a simple regularizer, Initial Feature Regularization (InFeR), that mitigates this phenomenon by regressing a subspace of features towards its value at initialization, improving performance over a state-of-the-art model-free algorithm in the Atari 2600 suite. Finally, we study how this regularization affects different notions of capacity and evaluate other mechanisms by which it may improve performance.
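To make the mechanism concrete, below is a minimal sketch of an InFeR-style auxiliary loss, based only on the abstract's description of regressing a subspace of features towards its value at initialization. The specific names (FeatureNet, make_infer_targets, infer_loss, num_aux_dims, infer_weight) and the fixed random projection defining the subspace are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch of an InFeR-style auxiliary loss (PyTorch).
# Assumption: the "subspace of features" is obtained via a fixed random
# linear projection of the network's penultimate features, and its target
# is the same projection applied to a frozen copy of the network at init.

import copy
import torch
import torch.nn as nn


class FeatureNet(nn.Module):
    """Toy feature extractor standing in for an RL agent's torso."""

    def __init__(self, obs_dim: int, feat_dim: int = 256):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(obs_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, feat_dim), nn.ReLU(),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.body(obs)


def make_infer_targets(net: FeatureNet, feat_dim: int, num_aux_dims: int = 32):
    """Freeze a copy of the network at initialization and a fixed linear
    projection that defines the regularized feature subspace."""
    frozen_net = copy.deepcopy(net)
    for p in frozen_net.parameters():
        p.requires_grad_(False)
    projection = nn.Linear(feat_dim, num_aux_dims, bias=False)
    for p in projection.parameters():
        p.requires_grad_(False)
    return frozen_net, projection


def infer_loss(net, frozen_net, projection, obs: torch.Tensor) -> torch.Tensor:
    """L2 penalty pulling the projected current features towards the
    projected features of the frozen initial network."""
    current = projection(net(obs))
    with torch.no_grad():
        initial = projection(frozen_net(obs))
    return ((current - initial) ** 2).mean()


if __name__ == "__main__":
    # Usage: add the penalty to the usual RL loss with a small coefficient.
    obs_dim, feat_dim = 8, 256
    net = FeatureNet(obs_dim, feat_dim)
    frozen_net, projection = make_infer_targets(net, feat_dim)
    obs = torch.randn(64, obs_dim)
    td_loss = torch.tensor(0.0)   # placeholder for the agent's TD loss
    infer_weight = 0.1            # illustrative regularization coefficient
    loss = td_loss + infer_weight * infer_loss(net, frozen_net, projection, obs)
    loss.backward()
```

In this reading, the penalty leaves most of the representation free to change while anchoring a small subspace to its initial values, which is one plausible way to preserve the network's ability to fit new target functions.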
