Poster
Near-Exact Privacy Amplification for Matrix Mechanisms
Christopher Choquette-Choo · Arun Ganesh · Saminul Haque · Thomas Steinke · Abhradeep Guha Thakurta
Hall 3 + Hall 2B #515
Fri 25 Apr, midnight to 2:30 a.m. PDT
Abstract:
We study the problem of computing the privacy parameters for DP machine learning when using privacy amplification via random batching and noise correlated across rounds via a correlation matrix C (i.e., the matrix mechanism). Past work on this problem either only applied to banded C or gave loose privacy parameters. In this work, we give a framework for computing near-exact privacy parameters for any lower-triangular, non-negative C. Our framework allows us to optimize the correlation matrix C while accounting for amplification, whereas past work could not. Empirically, we show this lets us achieve smaller RMSE on prefix sums than the previous state-of-the-art (SOTA). We also show that we can improve on SOTA performance on deep learning tasks. Our two main technical tools are (i) using Monte Carlo accounting to bypass composition, which was the main technical challenge for past work, and (ii) a "balls-in-bins" batching scheme that enables easy privacy analysis and is closer to practical random batching than Poisson sampling.
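For intuition, here is a minimal sketch (not the paper's implementation) of the matrix mechanism for prefix sums: the workload A is factored as A = BC, the encoded stream Cx is privatized with i.i.d. Gaussian noise, and the release B(Cx + z) = Ax + Bz carries noise correlated across rounds. The particular C below is an arbitrary lower-triangular, non-negative stand-in; the paper's contribution is computing near-exact privacy parameters for such C under amplification and optimizing C accordingly.

```python
# Illustrative sketch of the matrix mechanism for prefix sums (not the paper's code).
import numpy as np

def prefix_sum_workload(n: int) -> np.ndarray:
    """Lower-triangular all-ones matrix A, so (A x)_t = x_1 + ... + x_t."""
    return np.tril(np.ones((n, n)))

def matrix_mechanism(x: np.ndarray, C: np.ndarray, sigma: float,
                     rng: np.random.Generator) -> np.ndarray:
    """Noisy prefix sums via B (C x + z) with B = A C^{-1}."""
    A = prefix_sum_workload(len(x))
    B = A @ np.linalg.inv(C)                  # decoder matrix
    z = rng.normal(scale=sigma, size=len(x))  # i.i.d. noise; B correlates it across rounds
    return B @ (C @ x + z)

rng = np.random.default_rng(0)
n, sigma = 8, 1.0
x = rng.normal(size=n)  # per-round quantities (e.g., clipped gradient sums)
# Arbitrary lower-triangular, non-negative encoder as a stand-in for an optimized C.
C = np.eye(n) + 0.5 * np.eye(n, k=-1)
noisy_prefix_sums = matrix_mechanism(x, C, sigma, rng)
print(np.round(noisy_prefix_sums - np.cumsum(x), 3))  # additive error vs. exact prefix sums
```

Since the output error is Bz, the achievable RMSE depends on the interplay between B, the noise scale, and the sensitivity of C, which is why optimizing C while accounting for amplification matters.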
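The batching scheme named in the abstract can also be pictured concretely. The sketch below, with assumed parameters and hypothetical function names, contrasts Poisson sampling with a "balls-in-bins" assignment in which each example lands in exactly one uniformly random batch, as in practical random batching.

```python
# Illustrative comparison of Poisson sampling vs. balls-in-bins batching (not the paper's code).
import numpy as np

def poisson_batches(num_examples: int, num_batches: int,
                    rng: np.random.Generator) -> list:
    """Each example joins each batch independently with probability 1/num_batches."""
    p = 1.0 / num_batches
    return [np.flatnonzero(rng.random(num_examples) < p) for _ in range(num_batches)]

def balls_in_bins_batches(num_examples: int, num_batches: int,
                          rng: np.random.Generator) -> list:
    """Each example is thrown into exactly one uniformly random batch."""
    bins = rng.integers(num_batches, size=num_examples)
    return [np.flatnonzero(bins == t) for t in range(num_batches)]

rng = np.random.default_rng(1)
pois = poisson_batches(1000, 10, rng)
bins = balls_in_bins_batches(1000, 10, rng)
print([len(b) for b in pois])  # variable sizes; an example may appear in 0 or several batches
print([len(b) for b in bins])  # sizes concentrate near 100; each example appears exactly once
```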