
Poster in Workshop: Pitfalls of limited data and computation for Trustworthy ML

Differentially Private Federated Few-shot Image Classification

Aliaksandra Shysheya · Marlon Tobaben · John Bronskill · Andrew Paverd · Shruti Tople · Santiago Zanella-Beguelin · Richard E Turner · Antti Honkela


Abstract:

In Federated Learning (FL), the role of a central server is to aggregate the gradient or parameter updates sent by an array of remote clients, which perform local model training on their individual data. Even though the server in FL does not have access to raw user data, the privacy of users may still be compromised through the model parameters. To mitigate this and provide a guaranteed level of privacy, user-level differentially private (DP) FL aggregation methods can be employed, which can achieve accuracy approaching that of non-private training when there is a sufficient number of remote clients. In many practical distributed learning scenarios, the amount of labelled data available to each client is limited, necessitating few-shot learning approaches. An effective approach to few-shot learning is transfer learning, in which a backbone pretrained on large public datasets is fine-tuned on a downstream dataset. A key advantage of transfer learning systems is that they can be made extremely parameter efficient by updating only a small subset of model parameters during fine-tuning. This advantage is especially beneficial in the FL setting, as it minimizes the cost of each client-server communication during training by transferring only those model parameters that need to be updated. To understand under which conditions DP FL few-shot transfer learning can be effective, we perform a set of experiments revealing how the accuracy of DP FL image classification systems is affected as the model architecture, dataset, and subset of learnable parameters vary. We evaluate on three FL datasets, establishing state-of-the-art performance on the challenging FLAIR federated learning benchmark.
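The abstract combines two ingredients: user-level DP aggregation at the server and a parameter-efficient fine-tuning scheme in which only a small subset of model parameters is communicated. The sketch below illustrates the first ingredient with a standard DP-FedAvg-style Gaussian mechanism in NumPy; the function name, clipping norm, noise multiplier, and the 2,000-parameter subset size are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def dp_fedavg_aggregate(client_updates, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """User-level DP aggregation sketch (DP-FedAvg-style Gaussian mechanism).

    client_updates: list of 1-D arrays, each the flattened update to the
    small learnable parameter subset reported by one client.
    """
    rng = rng or np.random.default_rng()

    # Clip each client's contribution to bound per-user sensitivity.
    clipped = []
    for update in client_updates:
        norm = np.linalg.norm(update)
        clipped.append(update * min(1.0, clip_norm / (norm + 1e-12)))

    total = np.sum(clipped, axis=0)

    # Gaussian noise calibrated to the clipping norm and noise multiplier;
    # the (epsilon, delta) guarantee would be obtained via a privacy accountant.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(client_updates)

# Hypothetical usage: 100 clients, each fine-tuning only a 2,000-parameter
# subset (e.g. lightweight modulation parameters) of a frozen pretrained backbone.
updates = [np.random.randn(2000) * 0.01 for _ in range(100)]
aggregated = dp_fedavg_aggregate(updates, clip_norm=0.5, noise_multiplier=1.2)
```

Because only the small learnable subset is transmitted and aggregated, the per-round communication cost scales with the size of that subset rather than with the full backbone.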
