

In-Person Poster presentation / poster accept

Is Forgetting Less a Good Inductive Bias for Forward Transfer?

Jiefeng Chen · Timothy Nguyen · Dilan Gorur · Arslan Chaudhry

MH1-2-3-4 #70

Keywords: [ Deep Learning and representational learning ] [ transfer learning ] [ continual learning ]


Abstract:

One of the main motivations for studying continual learning is that the problem setting allows a model to accrue knowledge from past tasks to learn new tasks more efficiently. However, recent studies suggest that the key metric that continual learning algorithms optimize, reduction in catastrophic forgetting, does not correlate well with the forward transfer of knowledge. We believe that the conclusion reached in previous works stems from the way they measure forward transfer. We argue that the measure of forward transfer to a task should not be affected by the restrictions placed on the continual learner in order to preserve knowledge of previous tasks. Instead, forward transfer should be measured by how easy it is to learn a new task given a set of representations produced by continual learning on previous tasks. Under this notion of forward transfer, we evaluate different continual learning algorithms on a variety of image classification benchmarks. Our results indicate that less forgetful representations lead to better forward transfer, suggesting a strong correlation between retaining past information and learning efficiency on new tasks. Further, we find less forgetful representations to be more diverse and discriminative compared to their forgetful counterparts.
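The abstract's notion of forward transfer, how easily a new task can be learned from representations produced by continual learning on earlier tasks, can be illustrated with a simple probing setup. The sketch below is a minimal, hypothetical instantiation (a linear probe on frozen features, with placeholder data standing in for encoder outputs); the paper's actual evaluation protocol, models, and benchmarks are not specified in this abstract.

```python
# Hypothetical sketch: score forward transfer as the accuracy of a linear probe
# trained on frozen representations of a new task. The features here are random
# placeholders standing in for the outputs of a continually trained encoder.
import numpy as np
from sklearn.linear_model import LogisticRegression


def forward_transfer_score(feat_train, y_train, feat_test, y_test):
    """Fit a linear probe on frozen features for a new task and return its
    test accuracy, used here as a proxy for ease of learning that task."""
    probe = LogisticRegression(max_iter=1000)
    probe.fit(feat_train, y_train)
    return probe.score(feat_test, y_test)


# Placeholder features and labels (stand-ins for real encoder outputs).
rng = np.random.default_rng(0)
X_tr, y_tr = rng.normal(size=(200, 64)), rng.integers(0, 5, size=200)
X_te, y_te = rng.normal(size=(50, 64)), rng.integers(0, 5, size=50)
print(forward_transfer_score(X_tr, y_tr, X_te, y_te))
```

Under this kind of evaluation, the probe's accuracy depends only on the quality of the frozen representations, not on the constraints the continual learner faced while preserving earlier tasks, which is the distinction the abstract draws.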
