Efficient Continual Learning with Modular Networks and Task-Driven Priors

Tom Veniat · Ludovic Denoyer · Marc'Aurelio Ranzato


Keywords: [ lifelong learning ] [ neural network ] [ continual learning ] [ benchmark ] [ modular network ]

Wed 5 May 1 a.m. PDT — 3 a.m. PDT


Existing literature in Continual Learning (CL) has focused on overcoming catastrophic forgetting, the inability of the learner to recall how to perform tasks observed in the past. There are, however, other desirable properties of a CL system, such as the ability to transfer knowledge from previous tasks and to scale memory and compute sub-linearly with the number of tasks. Since most current benchmarks focus only on forgetting, using short streams of tasks, we first propose a new suite of benchmarks that probes CL algorithms along these additional axes. We then introduce a new modular architecture whose modules represent atomic skills that can be composed to perform a given task. Learning a task reduces to figuring out which past modules to re-use and which new modules to instantiate to solve the current task. Our learning algorithm leverages a task-driven prior over the exponential search space of all possible ways to combine modules, enabling efficient learning on long streams of tasks. Our experiments show that this modular architecture and learning algorithm perform competitively on widely used CL benchmarks while yielding superior performance on the more challenging benchmarks introduced in this work. The benchmark is publicly available.
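To make the module-composition idea concrete, below is a minimal, self-contained PyTorch sketch of one way such a search could look. It is not the authors' implementation: the prefix-reuse prior shown here and all names (`dims`, `libraries`, `make_module`, `learn_task`, and the `train_step`/`validate` callables) are illustrative assumptions for this sketch only.

```python
# Sketch (not the paper's code): per-layer module libraries, a
# task-driven prior over paths, and selection of the best path.
import torch.nn as nn

dims = [32, 64, 64, 10]                         # toy layer widths (assumed)
libraries = [[] for _ in range(len(dims) - 1)]  # one module library per layer

def make_module(i):
    """Instantiate a fresh atomic skill for layer i."""
    return nn.Sequential(nn.Linear(dims[i], dims[i + 1]), nn.ReLU())

def candidate_paths(prev_best_path):
    """Task-driven prior (sketch): instead of scoring every combination
    of modules (exponential in depth), consider only paths that reuse a
    prefix of the best past path and instantiate new modules for the
    remaining layers, so the number of candidates is linear in depth."""
    n = len(dims) - 1
    prefix_max = n if prev_best_path is not None else 0
    for k in range(prefix_max + 1):
        yield [("reuse", prev_best_path[i]) if i < k else ("new", None)
               for i in range(n)]

def build(path):
    """Assemble a network from a path; reused past modules stay frozen."""
    layers = []
    for i, (kind, idx) in enumerate(path):
        if kind == "reuse":
            m = libraries[i][idx]
            for p in m.parameters():
                p.requires_grad_(False)  # past skills are never updated
        else:
            m = make_module(i)           # new skill, trained on this task
        layers.append(m)
    return nn.Sequential(*layers)

def learn_task(train_step, validate, prev_best_path):
    """Train each candidate's new modules (train_step should optimize
    only parameters with requires_grad=True), keep the path with the
    best validation score, and register its new modules for reuse."""
    best = None
    for path in candidate_paths(prev_best_path):
        net = build(path)
        train_step(net)
        score = validate(net)
        if best is None or score > best[0]:
            best = (score, path, net)
    _, path, net = best
    final_path = []
    for i, (kind, idx) in enumerate(path):
        if kind == "new":
            libraries[i].append(net[i])
            idx = len(libraries[i]) - 1
        final_path.append(idx)
    return final_path  # per-layer library indices, reusable by later tasks
```

Under this (assumed) prefix-reuse prior, each new task evaluates at most depth + 1 candidate networks rather than a number exponential in depth, which is the kind of reduction that makes learning on long task streams tractable while still allowing both reuse of past skills and growth of new ones.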
