Invited Talk in the Workshop on Distributed and Private Machine Learning
Inference Risks for Machine Learning
David Evans
Abstract: When models are trained on private data, such as medical records or personal emails, there is a risk that those models will not only learn the hoped-for patterns, but will also learn and expose sensitive information about their training data. Several different types of inference attacks on machine learning models have been found, and we will characterize inference risks according to whether they expose statistical properties of the distribution used for training or specific information in the training dataset. Differential privacy provides formal guarantees bounding some (but not all) types of inference risk, but providing substantive differential privacy guarantees with state-of-the-art methods requires adding so much noise to the training process for complex models that the resulting models are useless. Experimental evidence, however, suggests that in practice inference attacks have limited power, and in many cases a very small amount of privacy noise seems to be enough to defuse them. In this talk, I will give an overview of a variety of inference risks for machine learning models and report on some experiments to better understand the power of inference attacks in more realistic settings.
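As background for the noise-versus-utility tradeoff the abstract mentions, here is a minimal sketch of how differential privacy noise typically enters training via DP-SGD-style updates (per-example gradient clipping plus calibrated Gaussian noise, after Abadi et al., CCS 2016). The function and parameter names are illustrative, not taken from the talk:

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, clip_norm=1.0,
                noise_multiplier=1.0, lr=0.1, rng=None):
    """One DP-SGD-style update: clip each example's gradient to
    clip_norm, sum, add Gaussian noise calibrated to the clipping
    norm, then average and take a gradient step."""
    rng = rng or np.random.default_rng(0)
    # Bound each individual example's influence on the update.
    clipped = [g * min(1.0, clip_norm / max(np.linalg.norm(g), 1e-12))
               for g in per_example_grads]
    # Noise scale is tied to the clipping norm, so the added noise
    # masks any single example's contribution.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=params.shape)
    noisy_mean = (np.sum(clipped, axis=0) + noise) / len(clipped)
    return params - lr * noisy_mean

# Toy usage: two per-example gradients for a 3-parameter model.
params = np.zeros(3)
grads = [np.array([0.5, -2.0, 1.0]), np.array([3.0, 0.1, -0.4])]
params = dp_sgd_step(params, grads, noise_multiplier=1.1)
```

The noise_multiplier controls the tradeoff the abstract describes: values large enough for substantive formal guarantees can swamp the learning signal for complex models, while the experimental evidence mentioned suggests much smaller amounts of noise may already defeat practical inference attacks.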
Bio: David Evans is a Professor of Computer Science at the University of Virginia where he leads a research group focusing on security and privacy (https://uvasrg.github.io). He won the Outstanding Faculty Award from the State Council of Higher Education for Virginia, and was Program Co-Chair for the 24th ACM Conference on Computer and Communications Security (CCS 2017) and the 30th (2009) and 31st (2010) IEEE Symposia on Security and Privacy, where he initiated the Systematization of Knowledge (SoK) papers. He is the author of an open computer science textbook (https://computingbook.org) and a children's book on combinatorics and computability (https://dori-mic.org), and co-author of a book on secure multi-party computation (https://securecomputation.org/). He has SB, SM, and PhD degrees from MIT and has been a faculty member at the University of Virginia since 1999.