CPR: Classifier-Projection Regularization for Continual Learning

Sungmin Cha · Hsiang Hsu · Taebaek Hwang · Flavio Calmon · Taesup Moon


Keywords: [ wide local minima ] [ regularization ] [ continual learning ]

Wed 5 May 5 p.m. PDT — 7 p.m. PDT


We propose classifier-projection regularization (CPR), a simple yet general patch that can be applied to existing regularization-based continual learning methods. Inspired by both recent results on neural networks with wide local minima and information theory, CPR adds a regularization term that maximizes the entropy of a classifier's output probability. We show that this additional term can be interpreted as a projection of the conditional distribution given by the classifier's output onto the uniform distribution. By applying the Pythagorean theorem for KL divergence, we then prove that this projection may (in theory) improve the performance of continual learning methods. In extensive experiments, we apply CPR to several state-of-the-art regularization-based continual learning methods and benchmark their performance on popular image recognition datasets. Our results show that CPR indeed promotes wide local minima and significantly improves both accuracy and plasticity while simultaneously mitigating the catastrophic forgetting of the baseline continual learning methods. The code and scripts for this work are available at
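As a concrete illustration, the entropy-maximizing term can be implemented as the KL divergence from the classifier's softmax output to the uniform distribution, since KL(p || u) = -H(p) + log C, where C is the number of classes; minimizing this divergence is therefore equivalent to maximizing the output entropy. The following is a minimal PyTorch sketch under that interpretation, not the authors' released implementation; the function name cpr_regularizer and the loss-weighting shown in the usage comment are hypothetical.

import math

import torch
import torch.nn.functional as F

def cpr_regularizer(logits: torch.Tensor) -> torch.Tensor:
    """KL(p || uniform) averaged over the batch, where p = softmax(logits).

    Since KL(p || u) = -H(p) + log C, minimizing this term maximizes
    the entropy of the classifier's output distribution.
    """
    log_p = F.log_softmax(logits, dim=1)  # numerically stable log-softmax
    p = log_p.exp()
    num_classes = logits.size(1)
    # KL(p || u) = sum_c p_c * (log p_c - log(1/C)) = sum_c p_c * (log p_c + log C)
    kl_to_uniform = (p * (log_p + math.log(num_classes))).sum(dim=1)
    return kl_to_uniform.mean()

# Hypothetical usage inside a regularization-based continual learning step:
#   total_loss = task_loss + lam * weight_penalty + beta * cpr_regularizer(logits)
# where weight_penalty is the base method's regularizer (e.g., the EWC
# quadratic term) and beta controls the strength of the CPR term.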
