Poster

Acceleration of Federated Learning with Alleviated Forgetting in Local Training

Chencheng Xu · Zhiwei Hong · Minlie Huang · Tao Jiang

Keywords: [ federated learning ]

Poster Spot B2 in Virtual World
Thu 28 Apr 6:30 p.m. PDT — 8:30 p.m. PDT

Abstract:

Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy by independently training local models on each client and then aggregating parameters on a central server, thereby producing an effective global model. Although a variety of FL algorithms have been proposed, their training efficiency remains low when the data are not independently and identically distributed (non-i.i.d.) across different clients. We observe that the slow convergence rates of the existing methods are (at least partially) caused by catastrophic forgetting during the local training stage on each individual client, which leads to a large increase in the loss on the previous training data held by other clients. Here, we propose FedReg, an algorithm that accelerates FL with alleviated knowledge forgetting in the local training stage by regularizing locally trained parameters with the loss on generated pseudo data, which encode the knowledge of previous training data learned by the global model. Our comprehensive experiments demonstrate that FedReg not only significantly improves the convergence rate of FL, especially when the neural network architecture is deep and the clients' data are extremely non-i.i.d., but also better protects privacy in classification problems and is more robust against gradient inversion attacks.
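The core idea in the abstract, i.e. penalizing a client's local update when it drifts from the global model's predictions on pseudo data, can be illustrated with a minimal NumPy sketch. This is only a toy stand-in under strong assumptions: the model is linear, the pseudo data here are random inputs rather than the generated samples FedReg actually constructs, and the function names (`fedavg_aggregate`, `local_train_with_reg`) are illustrative, not from the paper's code.

```python
import numpy as np

def fedavg_aggregate(client_weights, client_sizes):
    """Weighted average of client parameter vectors (standard FedAvg-style
    server aggregation)."""
    sizes = np.asarray(client_sizes, dtype=float)
    coeffs = sizes / sizes.sum()
    return sum(c * w for c, w in zip(coeffs, client_weights))

def local_train_with_reg(w_global, X, y, X_pseudo, lam=1.0, lr=0.1, steps=50):
    """Local gradient descent on a linear model, with an extra penalty that
    keeps the updated model close to the *global* model's predictions on
    pseudo inputs. This mimics the role of FedReg's pseudo-data loss
    (alleviating forgetting of other clients' knowledge); the real method's
    pseudo-data generation is not modeled here."""
    w = w_global.copy()
    y_pseudo = X_pseudo @ w_global  # "knowledge" encoded by the global model
    for _ in range(steps):
        grad_local = X.T @ (X @ w - y) / len(X)            # fit local data
        grad_reg = X_pseudo.T @ (X_pseudo @ w - y_pseudo) / len(X_pseudo)
        w -= lr * (grad_local + lam * grad_reg)            # regularized step
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d = 3
    X1 = rng.normal(size=(40, d))
    y1 = X1 @ np.array([1.0, 2.0, 3.0])   # this client's (non-i.i.d.) data
    w_global = np.zeros(d)
    X_pseudo = rng.normal(size=(20, d))

    w_reg = local_train_with_reg(w_global, X1, y1, X_pseudo, lam=5.0)
    w_noreg = local_train_with_reg(w_global, X1, y1, X_pseudo, lam=0.0)

    forget = lambda w: np.mean((X_pseudo @ w - X_pseudo @ w_global) ** 2)
    print("pseudo-data drift, regularized:", forget(w_reg))
    print("pseudo-data drift, unregularized:", forget(w_noreg))
```

With the penalty active (`lam > 0`), the locally trained model stays closer to the global model's behavior on the pseudo inputs than an unregularized update does, which is the mechanism the abstract credits for the faster convergence under non-i.i.d. data.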
