Poster
Scaling up the Banded Matrix Factorization Mechanism for Large Scale Differentially Private ML
Ryan McKenna
Hall 3 + Hall 2B #547
Fri 25 Apr, 7:00 p.m. – 9:30 p.m. PDT
Abstract:
Correlated noise mechanisms such as DP Matrix Factorization (DP-MF) have proven to be effective alternatives to DP-SGD in large-epsilon, few-epoch training regimes. Significant work has been done to find the best correlated noise strategies, and the current state-of-the-art approach is DP-BandMF, which optimally balances the benefits of privacy amplification and noise correlation. Despite its utility advantages, severe scalability limitations prevent this mechanism from handling large-scale training scenarios, where the number of training iterations may exceed 10^4 and the number of model parameters may exceed 10^7. In this work, we present techniques to scale up DP-BandMF along these two dimensions, significantly extending its reach and enabling it to effectively handle settings with over 10^6 training iterations and 10^9 model parameters, with no utility degradation at smaller scales.
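The banded structure mentioned in the abstract is what keeps streaming noise generation cheap: with a lower-triangular strategy matrix C whose bandwidth is b, the correlated noise C^{-1}z can be produced one step at a time by forward substitution, touching only the previous b-1 noise vectors. The sketch below is an illustrative reconstruction under that assumption (names and shapes are hypothetical, not the authors' implementation):

```python
import numpy as np

def banded_inverse_noise(C, z, bandwidth):
    """Solve C x = z by forward substitution for a lower-triangular,
    banded C, yielding the correlated noise x = C^{-1} z.

    C: (n, n) lower-triangular strategy matrix with bandwidth `bandwidth`
    z: (n, d) i.i.d. Gaussian noise, one d-dimensional vector per step

    Each step t only reads the previous bandwidth-1 noise vectors, so a
    streaming implementation needs O(bandwidth * d) memory, not O(n * d).
    """
    n = len(z)
    x = np.zeros_like(z)
    for t in range(n):
        lo = max(0, t - bandwidth + 1)  # only the in-band entries of row t are nonzero
        x[t] = (z[t] - C[t, lo:t] @ x[lo:t]) / C[t, t]
    return x
```

In training, x[t] would be added to the clipped gradient at step t; the banded restriction is exactly what bounds the per-step state to b noise vectors.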