

Poster in Workshop: Privacy Regulation and Protection in Machine Learning

Communication Efficient Differentially Private Federated Learning Using Second-Order Information

Mounssif Krouka · Antti Koskela · Tejas Kulkarni


Abstract:

Training machine learning models with differential privacy (DP) is commonly done using first-order methods such as DP-SGD. In the non-private setting, second-order methods aim to mitigate the slow convergence of first-order methods. DP methods that use second-order information likewise provide faster convergence; however, the existing methods cannot easily be turned into federated learning (FL) algorithms without the excessive communication cost required to exchange the Hessian or feature-covariance information between the nodes and the server. In this paper we propose DP-FedNew, a DP method for FL that uses second-order information and achieves a per-iteration communication cost similar to that of first-order methods such as DP Federated Averaging. Experiments on last-layer fine-tuning of deep convolutional networks demonstrate that our proposed algorithm is highly competitive with first- and second-order baselines for both record- and user-level DP across a range of privacy budgets. We also consider a variant that avoids excessive memory and compute requirements at the edge devices, and we provide a theoretical analysis of the method that illustrates its stability.
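For context, the sketch below illustrates the first-order, user-level-DP federated averaging baseline the abstract refers to: each client sends a clipped d-dimensional model update and the server adds Gaussian noise to the aggregate, so per-round communication is O(d) per client, whereas naively exchanging a d×d Hessian would cost O(d²). This is a generic DP-FedAvg-style sketch, not the paper's DP-FedNew algorithm; names such as `clip_norm` and `noise_multiplier` are illustrative assumptions.

```python
import numpy as np

def clip_update(update, clip_norm):
    """Project a client's update onto the L2 ball of radius clip_norm."""
    norm = np.linalg.norm(update)
    return update * min(1.0, clip_norm / max(norm, 1e-12))

def dp_fedavg_round(global_weights, client_updates, clip_norm, noise_multiplier, rng):
    """One user-level-DP federated averaging round (generic baseline sketch).

    Each client contributes a vector the size of the model, so per-round
    communication is O(d) per client. Noise calibrated to the clipping norm
    is added to the summed updates (Gaussian mechanism on the sum).
    """
    n = len(client_updates)
    clipped = [clip_update(u, clip_norm) for u in client_updates]
    summed = np.sum(clipped, axis=0)
    noisy_sum = summed + rng.normal(0.0, noise_multiplier * clip_norm, size=summed.shape)
    return global_weights + noisy_sum / n

# Toy usage: 10 clients, 1000-dimensional last-layer weights.
rng = np.random.default_rng(0)
weights = np.zeros(1000)
updates = [rng.normal(scale=0.1, size=1000) for _ in range(10)]
weights = dp_fedavg_round(weights, updates, clip_norm=1.0, noise_multiplier=1.1, rng=rng)
```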
