Poster
Wed May 08 09:00 AM -- 11:00 AM (PDT) @ Great Hall BC #82
Learning to Learn without Forgetting by Maximizing Transfer and Minimizing Interference
Matt Riemer · Juan Ignacio Cases Martin · Robert Ajemian · Miao Liu · Irina Rish · Yuhai Tu · Gerald Tesauro

Poor performance on continual learning over non-stationary data distributions remains a major challenge in scaling neural network learning to more human-realistic settings. In this work, we propose a new conceptualization of the continual learning problem in terms of a temporally symmetric trade-off between transfer and interference that can be optimized by enforcing gradient alignment across examples. We then propose a new algorithm, Meta-Experience Replay (MER), that directly exploits this view by combining experience replay with optimization-based meta-learning. This method learns parameters that make interference based on future gradients less likely and transfer based on future gradients more likely. We conduct experiments across continual lifelong supervised learning benchmarks and non-stationary reinforcement learning environments, demonstrating that our approach consistently outperforms recently proposed baselines for continual learning. Our experiments show that the gap between MER and the baseline algorithms grows both as the environment becomes more non-stationary and as the fraction of the total experiences stored becomes smaller.
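The page carries only the abstract, so as a rough illustration of the mechanism it describes (experience replay combined with optimization-based meta-learning that rewards gradient alignment across examples), here is a minimal PyTorch sketch of a simplified MER-style update. It is not the paper's exact Algorithm: the buffer size, step sizes, and batch shape below are illustrative placeholders, and the sketch collapses the within-batch and across-batch meta-updates into a single Reptile-style interpolation, which to first order favors positive gradient dot products between examples.

```python
import copy
import random

import torch

# Illustrative hyperparameters (the paper tunes these per benchmark).
BUFFER_SIZE = 500  # total experiences kept in the replay buffer
BATCHES = 2        # replayed batches per incoming example
BATCH_SIZE = 5     # examples per inner batch, including the current one
INNER_LR = 0.03    # within-batch SGD step size
META_LR = 0.5      # Reptile interpolation rate toward the adapted weights

buffer, seen = [], 0

def reservoir_add(example):
    """Reservoir sampling: keep a uniform sample of the stream in a fixed budget."""
    global seen
    seen += 1
    if len(buffer) < BUFFER_SIZE:
        buffer.append(example)
    else:
        j = random.randrange(seen)
        if j < BUFFER_SIZE:
            buffer[j] = example

def mer_step(model, loss_fn, example):
    """One simplified MER-style update on a single incoming (x, y) example:
    sequential SGD over replayed batches, then a Reptile-style interpolation
    from the original weights toward the adapted weights."""
    before = copy.deepcopy(model.state_dict())
    for _ in range(BATCHES):
        # Mix replayed experiences with the current example.
        batch = random.sample(buffer, min(BATCH_SIZE - 1, len(buffer))) + [example]
        for bx, by in batch:
            loss = loss_fn(model(bx), by)
            model.zero_grad()
            loss.backward()
            with torch.no_grad():
                for p in model.parameters():
                    p -= INNER_LR * p.grad
    # Reptile meta-update: move the stored weights a fraction META_LR of the
    # way toward the weights reached after the inner loop.
    after = model.state_dict()
    with torch.no_grad():
        for k in after:
            after[k].copy_(before[k] + META_LR * (after[k] - before[k]))
    reservoir_add(example)
```

On a non-stationary stream, one would call `mer_step(model, loss_fn, (x, y))` once per incoming example; the interpolation step is what distinguishes this from plain experience replay, since simply averaging SGD trajectories over mixed old and new batches implicitly encourages the gradients of those batches to align rather than interfere.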