Poster
Decentralized Sporadic Federated Learning: A Unified Algorithmic Framework with Convergence Guarantees
Shahryar Zehtabi · Dong-Jun Han · Rohit Parasnis · Seyyedali Hosseinalipour · Christopher Brinton
Hall 3 + Hall 2B #442
Abstract:
Decentralized federated learning (DFL) captures FL settings where both (i) model updates and (ii) model aggregations are exclusively carried out by the clients without a central server. Existing DFL works have mostly focused on settings where clients conduct a fixed number of local updates between local model exchanges, overlooking heterogeneity and dynamics in communication and computation capabilities. In this work, we propose Decentralized Sporadic Federated Learning (DSpodFL), a DFL methodology built on a generalized notion of *sporadicity* in both local gradient and aggregation processes. DSpodFL subsumes many existing decentralized optimization methods under a unified algorithmic framework by modeling the per-iteration (i) occurrence of gradient descent at each client and (ii) exchange of models between client pairs as arbitrary indicator random variables, thus capturing *heterogeneous and time-varying* computation/communication scenarios. We analytically characterize the convergence behavior of DSpodFL for both convex and non-convex models and for both constant and diminishing learning rates, under mild assumptions on the communication graph connectivity, data heterogeneity across clients, and gradient noise. We show how our bounds recover existing results from decentralized gradient descent as special cases. Experiments demonstrate that DSpodFL consistently achieves improved training speeds compared with baselines under various system settings.
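The sketch below illustrates the sporadicity idea described in the abstract: at each iteration, Bernoulli indicator variables decide (i) whether each client performs a local gradient step and (ii) whether each client pair exchanges models. This is a minimal, hypothetical simulation with an illustrative quadratic loss; the variable names, probabilities, and mixing rule are assumptions for exposition and are not taken from the paper's implementation.

```python
import numpy as np

# Minimal sketch of sporadic decentralized learning (illustrative only).
# Assumed setup: each client i minimizes ||x_i - target_i||^2 / 2, a stand-in
# for its heterogeneous local objective.

rng = np.random.default_rng(0)
num_clients, dim = 5, 3
models = rng.normal(size=(num_clients, dim))    # x_i: local model at client i
targets = rng.normal(size=(num_clients, dim))   # heterogeneous local optima
lr, mix_weight = 0.1, 0.2                       # step size and gossip weight (assumed values)

def local_gradient(i):
    # Gradient of the illustrative quadratic loss at client i.
    return models[i] - targets[i]

for t in range(100):
    # (i) Sporadic local computation: an indicator per client decides whether
    # it runs a gradient step this iteration.
    do_sgd = rng.random(num_clients) < 0.6
    grads = np.array([local_gradient(i) if do_sgd[i] else np.zeros(dim)
                      for i in range(num_clients)])

    # (ii) Sporadic aggregation: an indicator per client pair decides whether
    # they exchange models this iteration (symmetric gossip step).
    exchange = rng.random((num_clients, num_clients)) < 0.3
    exchange = np.triu(exchange, k=1)
    exchange = exchange | exchange.T

    new_models = models.copy()
    for i in range(num_clients):
        for j in range(num_clients):
            if exchange[i, j]:
                new_models[i] += mix_weight * (models[j] - models[i])
        new_models[i] -= lr * grads[i]
    models = new_models

# Clients drift toward consensus despite only sporadic computation/communication.
print("consensus spread:", np.linalg.norm(models - models.mean(axis=0)))
```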