

Poster

Anatomy of Catastrophic Forgetting: Hidden Representations and Task Semantics

Vinay Ramasesh · Ethan Dyer · Maithra Raghu

Keywords: [ representation learning ] [ continual learning ] [ catastrophic forgetting ] [ representation analysis ]


Abstract:

Catastrophic forgetting is a recurring challenge to developing versatile deep learning models. Despite its ubiquity, there is limited understanding of its connections to neural network (hidden) representations and task semantics. In this paper, we address this important knowledge gap. Through quantitative analysis of neural representations, we find that deeper layers are disproportionately responsible for forgetting, with sequential training resulting in an erasure of earlier tasks' representational subspaces. Methods to mitigate forgetting stabilize these deeper layers, but differ in their precise effects: some increase feature reuse, while others store task representations orthogonally, preventing interference. These insights also enable the development of an analytic argument and empirical picture relating forgetting to task semantic similarity, where we find that maximal forgetting occurs for task sequences with intermediate similarity.
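The layer-wise representational analysis described above can be illustrated with a similarity measure such as linear centered kernel alignment (CKA). The sketch below is an assumption about one plausible implementation, not the authors' released code; the function name `linear_cka` and the `acts_before` / `acts_after` activation arrays are hypothetical.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear centered kernel alignment (CKA) between two activation
    matrices of shape (num_examples, num_features).

    Returns a value in [0, 1]; 1 means the two representations span
    the same subspace up to rotation and scaling.
    """
    # Center each feature dimension.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # Linear CKA = ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    numerator = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    denominator = (np.linalg.norm(X.T @ X, ord="fro")
                   * np.linalg.norm(Y.T @ Y, ord="fro"))
    return numerator / denominator

# Hypothetical usage: for each layer l, compare its representation of
# task-A inputs before and after training on task B. Low CKA at the
# deeper layers would indicate those layers changed most, consistent
# with the finding that deeper layers drive forgetting.
#   per_layer_cka = [linear_cka(acts_before[l], acts_after[l])
#                    for l in range(num_layers)]
```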
