In-Person Poster presentation / poster accept

Individual Privacy Accounting with Gaussian Differential Privacy

Antti Koskela · Marlon Tobaben · Antti Honkela

MH1-2-3-4 #156

Keywords: [ Social Aspects of Machine Learning ] [ differential privacy ] [ gaussian differential privacy ] [ privacy accounting ] [ fully adaptive compositions ] [ individual privacy loss ]

Wed 3 May 2:30 a.m. PDT — 4:30 a.m. PDT

Abstract: Individual privacy accounting bounds the differential privacy (DP) loss individually for each participant in an analysis. This can be informative, as the individual privacy losses are often considerably smaller than those indicated by DP bounds based on worst-case losses at each data access. To account for the individual losses in a principled manner, we need a privacy accountant for adaptive compositions of mechanisms, where the loss incurred at a given data access is allowed to be smaller than the worst-case loss. Such an analysis has been carried out for Rényi differential privacy by Feldman and Zrnic (2021), but not yet for the so-called optimal privacy accountants. We take first steps in this direction by providing a careful analysis using Gaussian differential privacy, which gives optimal bounds for the Gaussian mechanism, one of the most versatile DP mechanisms. This approach is based on determining a certain supermartingale for the hockey-stick divergence and on extending the Rényi divergence-based fully adaptive composition results of Feldman and Zrnic (2021). We also consider measuring the individual $(\varepsilon,\delta)$-privacy losses using the so-called privacy loss distributions. Using the Blackwell theorem, we can then use the results of Feldman and Zrnic (2021) to construct an approximative individual $(\varepsilon,\delta)$-accountant. Finally, we show how to speed up FFT-based individual DP accounting using the Plancherel theorem.
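The abstract notes that Gaussian differential privacy (GDP) gives optimal bounds for the Gaussian mechanism. As a minimal illustration of this background fact (not of the paper's individual accountant), the standard GDP duality converts a $\mu$-GDP guarantee into a tight $(\varepsilon,\delta)$ curve via $\delta(\varepsilon) = \Phi(-\varepsilon/\mu + \mu/2) - e^{\varepsilon}\,\Phi(-\varepsilon/\mu - \mu/2)$, where $\Phi$ is the standard normal CDF, and a Gaussian mechanism with sensitivity $\Delta$ and noise scale $\sigma$ is $(\Delta/\sigma)$-GDP. A sketch in Python using only the standard library; function names are illustrative:

```python
from math import erf, exp, sqrt


def std_normal_cdf(x: float) -> float:
    """Standard normal CDF, Phi(x), via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))


def gdp_delta(eps: float, mu: float) -> float:
    """Tight delta(eps) for a mu-GDP mechanism (Gaussian DP duality)."""
    return (std_normal_cdf(-eps / mu + mu / 2.0)
            - exp(eps) * std_normal_cdf(-eps / mu - mu / 2.0))


# Example: Gaussian mechanism with sensitivity 1 and sigma = 1 is 1-GDP;
# its tight delta at eps = 1 is roughly 0.127.
delta = gdp_delta(eps=1.0, mu=1.0)
```

The curve is monotonically decreasing in $\varepsilon$, so one can also numerically invert it to find the smallest $\varepsilon$ for a target $\delta$.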
