Poster
Exploiting Hidden Symmetry to Improve Objective Perturbation for DP linear learners with a nonsmooth L1-norm
Du Chen · Geoffrey A. Chua
Abstract:
Objective Perturbation (OP) is a classic approach to differentially private (DP) convex optimization with smooth loss functions, but it is less understood in nonsmooth cases. In this work, we study how to apply OP to DP linear learners under loss functions with an implicit ℓ1-norm structure, with max(0, x) as a motivating example. We propose to first smooth out the implicit ℓ1-norm by convolution, and then invoke standard OP. Convolution has several advantages that distinguish it from the Moreau envelope, such as approximating from above and offering a higher degree of freedom in hyperparameters. These advantages, in conjunction with the symmetry of the ℓ1-norm, yield a tighter pointwise approximation, which in turn enables a tighter analysis of generalization risk via pointwise bounds. Under mild assumptions on the ground-truth distribution, the proposed OP-based algorithm is shown to be rate-optimal, achieving excess generalization risk O(1/√n + √(d ln(1/δ))/(nε)). Experiments demonstrate that the proposed method performs competitively with Noisy-SGD.
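The contrast between convolution smoothing and the Moreau envelope can be illustrated numerically. The sketch below is a minimal illustration, not the paper's construction: it assumes a uniform kernel on [-β, β] (the abstract does not specify the kernel) and checks that the convolution-smoothed absolute value approximates |x| from above, whereas the Moreau envelope (the Huber function) approximates it from below.

```python
import numpy as np

beta = 0.5  # smoothing parameter (assumed for illustration; not specified in the abstract)
x = np.linspace(-2, 2, 401)
abs_x = np.abs(x)

# Convolution of |.| with a uniform density on [-beta, beta]:
#   equals (x^2 + beta^2) / (2*beta) for |x| < beta, and |x| otherwise.
conv_smooth = np.where(np.abs(x) >= beta, abs_x, (x**2 + beta**2) / (2 * beta))

# Moreau envelope of |.| with parameter beta (the Huber function):
#   equals x^2 / (2*beta) for |x| <= beta, and |x| - beta/2 otherwise.
moreau = np.where(np.abs(x) <= beta, x**2 / (2 * beta), abs_x - beta / 2)

# Convolution smoothing upper-bounds |x|; the Moreau envelope lower-bounds it.
assert np.all(conv_smooth >= abs_x - 1e-12)
assert np.all(moreau <= abs_x + 1e-12)
print("max over-approximation gap: ", np.max(conv_smooth - abs_x))  # beta/2, attained at x = 0
print("max under-approximation gap:", np.max(abs_x - moreau))       # beta/2, attained for |x| >= beta
```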