Poster

Byzantine-Robust Learning on Heterogeneous Datasets via Bucketing

Sai Karimireddy · Lie He · Martin Jaggi

Keywords: [ distributed learning ] [ federated learning ]

Tue 26 Apr 10:30 a.m. PDT — 12:30 p.m. PDT
 
Abstract:

In Byzantine-robust distributed or federated learning, a central server wants to train a machine learning model over data distributed across multiple workers. However, a fraction of these workers may deviate from the prescribed algorithm and send arbitrary messages. While this problem has received significant attention recently, most current defenses assume that the workers hold identical data. For the realistic case where the data across workers are heterogeneous (non-iid), we design new attacks that circumvent current defenses, leading to significant loss of performance. We then propose a simple bucketing scheme that adapts existing robust algorithms to heterogeneous datasets at a negligible computational cost. We validate our approach both theoretically and experimentally, showing that combining bucketing with existing robust algorithms is effective against challenging attacks. Our work is the first to establish guaranteed convergence for the Byzantine-robust problem with non-iid data under realistic assumptions.
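As a rough illustration of the idea, here is a minimal sketch of one plausible reading of the bucketing step: randomly permute the n worker updates, average them in buckets of size s, and feed the bucket means to an existing robust aggregator. The function names, the parameter s, and the use of coordinate-wise median as the downstream aggregator are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def bucketing(gradients, s, seed=None):
    """Randomly partition the n worker updates into buckets of
    (at most) s workers each and return the per-bucket averages.
    gradients: array of shape (n, d), one row per worker.
    Setting s = 1 recovers the underlying robust aggregator."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(len(gradients))
    return np.stack([
        gradients[perm[i:i + s]].mean(axis=0)
        for i in range(0, len(gradients), s)
    ])

def coordinate_wise_median(updates):
    # Placeholder robust aggregator; any existing rule such as
    # Krum or trimmed mean could be plugged in here instead.
    return np.median(updates, axis=0)

# Usage: 12 workers with 5-dimensional gradients; a Byzantine
# worker could send an arbitrary row, which the combined
# bucketing-plus-robust-aggregation pipeline must tolerate.
grads = np.random.default_rng(0).normal(size=(12, 5))
robust_update = coordinate_wise_median(bucketing(grads, s=2, seed=1))
```

Intuitively, averaging within random buckets mixes updates from heterogeneous workers, so the inputs seen by the robust aggregator are more homogeneous while Byzantine influence stays diluted.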
