

Poster in Workshop on Distributed and Private Machine Learning

Distributed Gaussian Differential Privacy Via Shuffling

Kan Chen · Qi Long


Abstract: Traditionally, there are two models for implementing differential privacy: the local model and the centralized model. The \emph{shuffled model} is a relatively new model that aims to provide greater accuracy while preserving privacy by shuffling batches of similar data. In this paper, we study the privacy of the \emph{shuffled model} under ``$f$-differential privacy'' ($f$-DP), a recent relaxation of the traditional $(\epsilon,\delta)$-differential privacy. We provide a powerful technique for importing existing \emph{shuffled model} results proven under $(\epsilon,\delta)$-DP into $f$-DP, from which we derive a simple and easy-to-interpret theorem of privacy amplification by shuffling for $f$-DP. Furthermore, we prove that, compared with the original \emph{shuffled model} of \cite{cheu2019distributed}, $f$-DP provides a tighter upper bound in the privacy analysis of sum queries. The $f$-DP approach can be applied to broader classes of models to achieve more accurate privacy analysis.
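For context, a minimal sketch of the $f$-DP framework (background from Dong, Roth, and Su's work introducing $f$-DP, not stated in the abstract itself): privacy is measured by the trade-off function between the type I and type II errors of any test distinguishing the outputs on neighboring datasets, and Gaussian differential privacy ($\mu$-GDP) is the special case given by the Gaussian trade-off curve:
$$T(P,Q)(\alpha) = \inf\{\,\beta_\phi : \alpha_\phi \le \alpha\,\}, \qquad G_\mu(\alpha) = \Phi\bigl(\Phi^{-1}(1-\alpha) - \mu\bigr),$$
where the infimum ranges over rejection rules $\phi$ with type I error $\alpha_\phi$ and type II error $\beta_\phi$, and $\Phi$ is the standard normal CDF. A mechanism is $f$-DP if its trade-off function lies everywhere above $f$; larger trade-off functions mean the two datasets are harder to distinguish, hence stronger privacy.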
