

Poster in Workshop: Socially Responsible Machine Learning

FedER: Communication-Efficient Byzantine-Robust Federated Learning

Yukun Jiang · Xiaoyu Cao · Hao Chen · Neil Gong


Abstract:

In this work, we propose FedER, a federated learning method that is both communication-efficient and Byzantine-robust. Our key idea is to reduce the communication cost of the state-of-the-art robust FL method by pruning the model updates. Specifically, the server collects a small clean dataset, which is split into a training set and a validation set. In each round of FL, the clients prune their model updates before sending them to the server. The server also derives a server model update based on the training set and prunes it. The server determines the pruning fraction by evaluating the model accuracy on the validation set. We further propose mutual masking for each client, which keeps only the parameters that survive pruning in both the client model update and the server model update. The mutual mask is used to filter out parameters in unusual dimensions of malicious updates. We also occasionally normalize the masked client model updates to limit the impact of attacks. Our extensive experiments show that FedER 1) significantly reduces the communication cost for clients in adversarial settings and 2) achieves comparable or even better robustness than the state-of-the-art Byzantine-robust method.
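To make the aggregation step concrete, below is a minimal sketch of how pruning, mutual masking, and normalization could fit together. It is not the authors' implementation: the function names (`prune`, `mutual_mask_aggregate`), the fixed `keep_frac`, and applying normalization in every round are assumptions for illustration; in FedER the server tunes the pruning fraction on its validation set and normalizes only occasionally.

```python
# Hypothetical sketch of a FedER-style aggregation round (not the authors' code).
# Model updates are assumed to be flattened into 1-D numpy arrays.
import numpy as np

def prune(update, keep_frac):
    """Keep only the largest-magnitude entries of an update; zero out the rest."""
    k = max(1, int(keep_frac * update.size))
    threshold = np.sort(np.abs(update))[-k]
    return np.where(np.abs(update) >= threshold, update, 0.0)

def mutual_mask_aggregate(client_updates, server_update, keep_frac=0.1):
    """Aggregate pruned client updates, using the pruned server update as a reference."""
    pruned_server = prune(server_update, keep_frac)
    server_mask = pruned_server != 0
    masked = []
    for u in client_updates:
        pruned_client = prune(u, keep_frac)
        # Mutual mask: keep only dimensions retained by BOTH the client and server updates,
        # filtering out parameters in unusual dimensions of malicious updates.
        mask = (pruned_client != 0) & server_mask
        m = np.where(mask, pruned_client, 0.0)
        # Normalize the masked update to the server update's norm to limit oversized updates.
        norm = np.linalg.norm(m)
        if norm > 0:
            m = m * (np.linalg.norm(pruned_server) / norm)
        masked.append(m)
    return np.mean(masked, axis=0)

# Toy usage: three benign clients plus one heavily scaled (malicious-looking) update.
rng = np.random.default_rng(0)
server = rng.normal(size=1000)
clients = [server + rng.normal(scale=0.1, size=1000) for _ in range(3)]
clients.append(10.0 * rng.normal(size=1000))  # outlier update
aggregated = mutual_mask_aggregate(clients, server)
print(aggregated.shape, float(np.linalg.norm(aggregated)))
```

In this sketch, clients would only need to transmit the nonzero entries of their pruned updates, which is where the communication savings come from.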
