

In-Person Poster presentation / poster accept

Online Boundary-Free Continual Learning by Scheduled Data Prior

Hyunseo Koh · Minhyuk Seo · Jihwan Bang · Hwanjun Song · Deokki Hong · Seulki Park · Jung-Woo Ha · Jonghyun Choi

MH1-2-3-4 #64

Keywords: [ Deep Learning and representational learning ] [ boundary-free ] [ data prior ] [ continual learning ]


Abstract:

The typical continual learning setup assumes that the dataset is split into multiple discrete tasks. We argue that this is less realistic, as streamed data in the real world carries no notion of task boundaries. Here, we take a step toward more realistic online continual learning: learning a continuously changing data distribution without explicit task boundaries, which we call the boundary-free setup. Since there is no clear task boundary, it is not obvious when and which past information should be preserved to better resolve the stability-plasticity dilemma. To this end, we propose a scheduled transfer of previously learned knowledge, together with a data-driven balance between past and present knowledge in the learning objective. Moreover, since previously proposed forgetting measures are not straightforward to apply without task boundaries, we propose a novel information-theoretic measure that can capture forgetting in this setup. We empirically evaluate our method on a Gaussian data stream and its periodic extension, which models the periodic data distributions frequently observed in real-life data, as well as on the conventional disjoint task split. Our method outperforms prior art by large margins in various setups on four popular benchmark datasets: CIFAR-10, CIFAR-100, TinyImageNet, and ImageNet.
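To make the boundary-free setup concrete, below is a minimal sketch of what a Gaussian data stream could look like, assuming each class's arrival probability follows a Gaussian schedule over normalized time so that neighboring classes blend with no hard boundary. This is an illustrative reading of the abstract, not the authors' released code; the function name gaussian_stream, the class centers mus, and the width sigma are hypothetical choices.

    # Illustrative sketch (not the authors' code): a boundary-free stream
    # in which each class's samples arrive under a Gaussian schedule, so
    # class distributions overlap without explicit task boundaries.
    import itertools
    from collections import Counter

    import numpy as np

    rng = np.random.default_rng(0)

    def gaussian_stream(num_classes=10, stream_len=10_000,
                        sigma=0.06, periodic=False):
        """Yield one class label per time step.

        Each class c is centered at a point mu_c in normalized time
        [0, 1]; the probability of drawing class c at time t follows
        a Gaussian N(mu_c, sigma^2), so adjacent classes blend
        smoothly. With periodic=True the schedule wraps around,
        modeling recurring (e.g., seasonal) distributions like the
        periodic extension mentioned in the abstract.
        """
        mus = (np.arange(num_classes) + 0.5) / num_classes  # class centers
        for step in range(stream_len):
            t = step / stream_len
            d = np.abs(t - mus)
            if periodic:
                d = np.minimum(d, 1.0 - d)  # wrap-around distance
            w = np.exp(-0.5 * (d / sigma) ** 2)  # unnormalized weights
            yield rng.choice(num_classes, p=w / w.sum())

    # Example: class frequencies over the first 1,000 steps show the
    # earliest-scheduled classes dominating, with soft overlap.
    print(Counter(itertools.islice(gaussian_stream(), 1000)))

In this sketch, sigma controls how much adjacent class distributions overlap: a small sigma approaches the conventional disjoint task split, while a large sigma yields a heavily blurred stream, which is one way to see why boundary-dependent forgetting measures become hard to apply here.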
