

Virtual presentation / poster accept

Better Generative Replay for Continual Federated Learning

Daiqing Qi · Handong Zhao · Sheng Li

Keywords: [ federated learning ] [ continual learning ] [ Deep Learning and representational learning ]


Abstract:

Federated Learning (FL) aims to develop a centralized server that learns from distributed clients via communication without accessing the clients' local data. However, existing works mainly focus on federated learning in a single-task scenario with static data. In this paper, we introduce the continual federated learning (CFL) problem, where clients incrementally learn new tasks and history data cannot be stored due to certain reasons, such as limited storage and data retention policies. Generative replay (GR) based methods are effective for continual learning without storing history data. However, we find that naively adapting GR models to this setting fails. By analyzing the behaviors of clients during training, we find that the unstable training process caused by distributed training on non-IID data leads to notable performance degradation. To address this problem, we propose our FedCIL model with two simple but effective solutions: 1. model consolidation and 2. consistency enforcement. Experimental results on multiple benchmark datasets demonstrate that our method significantly outperforms baselines. Code is available at: https://github.com/daiqing98/FedCIL.
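The core idea behind GR-based continual federated learning is that each client maintains a conditional generator whose samples stand in for inaccessible history data, while a consistency term ties local training to the server model to counter instability under non-IID data. Below is a minimal sketch of that loop; the model classes, loss weights, and aggregation scheme are illustrative assumptions, not the authors' FedCIL implementation (see the linked repository for the actual method).

```python
# Minimal sketch of generative-replay-based continual federated learning.
# NOT the authors' FedCIL implementation; architectures, loss weights,
# and the aggregation rule below are illustrative assumptions only.
import copy
import torch
import torch.nn.functional as F

class Generator(torch.nn.Module):
    """Toy conditional generator: (noise, class label) -> flat image."""
    def __init__(self, z_dim=64, n_classes=10, out_dim=784):
        super().__init__()
        self.embed = torch.nn.Embedding(n_classes, z_dim)
        self.net = torch.nn.Sequential(
            torch.nn.Linear(z_dim, 256), torch.nn.ReLU(),
            torch.nn.Linear(256, out_dim), torch.nn.Tanh())

    def forward(self, z, y):
        return self.net(z * self.embed(y))

class Classifier(torch.nn.Module):
    """Toy classifier trained on real data plus replayed pseudo-data."""
    def __init__(self, in_dim=784, n_classes=10):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(in_dim, 256), torch.nn.ReLU(),
            torch.nn.Linear(256, n_classes))

    def forward(self, x):
        return self.net(x)

def replay_batch(generator, seen_classes, batch_size=32, z_dim=64):
    """Sample pseudo-examples of previously seen classes from the generator."""
    y = torch.tensor(seen_classes)[torch.randint(len(seen_classes), (batch_size,))]
    z = torch.randn(batch_size, z_dim)
    with torch.no_grad():
        x = generator(z, y)
    return x, y

def client_update(clf, global_clf, generator, loader, seen_classes, epochs=1):
    """Local training: real data + generative replay + consistency to the server model."""
    opt = torch.optim.SGD(clf.parameters(), lr=0.01)
    for _ in range(epochs):
        for x, y in loader:
            # Mix the current task's real batch with replayed old-class samples.
            x_old, y_old = replay_batch(generator, seen_classes)
            x_all = torch.cat([x, x_old])
            y_all = torch.cat([y, y_old])
            loss = F.cross_entropy(clf(x_all), y_all)
            # Consistency term (illustrative): keep local predictions on
            # replayed data close to the frozen global model's predictions,
            # to stabilize training under non-IID client data.
            with torch.no_grad():
                g_probs = F.softmax(global_clf(x_old), dim=1)
            loss = loss + 0.5 * F.kl_div(
                F.log_softmax(clf(x_old), dim=1), g_probs, reduction="batchmean")
            opt.zero_grad()
            loss.backward()
            opt.step()
    return clf.state_dict()

def server_aggregate(client_states):
    """Server step: plain parameter averaging (FedAvg-style) over client models."""
    avg = copy.deepcopy(client_states[0])
    for k in avg:
        avg[k] = torch.stack([s[k].float() for s in client_states]).mean(0)
    return avg
```

In this sketch the replayed samples serve double duty: they rehearse old classes and anchor each client to the global model, which is the kind of stabilization the abstract's consistency enforcement targets; how FedCIL consolidates the generators themselves is detailed in the paper and repository.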
