

Invited Talk
in
Workshop: Navigating and Addressing Data Problems for Foundation Models (DPFM)

Invited Talk #4 - Characterizing Machine Unlearning through Definitions and Implementations [Speaker: Nicolas Papernot (University of Toronto & Vector Institute)]

Nicolas Papernot


Abstract:

The talk presents open problems in the study of machine unlearning. The need for machine unlearning, i.e., obtaining a model one would get without training on a subset of data, arises from privacy legislation and as a potential solution to data poisoning or copyright claims. The first part of the talk discusses approaches that provide exact unlearning: these approaches output the same distribution of models as would have been obtained by training without the subset of data to be unlearned in the first place. While such approaches can be computationally expensive, we discuss why it is difficult to relax the guarantee they provide in order to pave the way for more efficient approaches. The second part of the talk asks whether we can verify unlearning. Here we show how an entity can claim plausible deniability when challenged about an unlearning request it claims to have processed, and conclude that, at the level of model weights, being unlearned is not always a well-defined property. Instead, unlearning is an algorithmic property.
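As an illustrative formalization of the exact-unlearning guarantee described above (the notation here is introduced for clarity and is not taken from the talk): writing A for a randomized training algorithm, U for an unlearning mechanism, D for the training set, and D_f ⊆ D for the data to be unlearned, exact unlearning can be stated as a distributional equality,

\[ U\big(A(D),\, D,\, D_f\big) \;\overset{d}{=}\; A\big(D \setminus D_f\big), \]

i.e., the distribution over models produced by unlearning D_f from a model trained on D matches the distribution over models obtained by training from scratch on D \setminus D_f. Retraining without D_f trivially satisfies this equality but is costly, which is why the talk considers whether the guarantee can be relaxed to enable more efficient approaches.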

Bio: Nicolas Papernot is an Assistant Professor at the University of Toronto, in the Department of Electrical and Computer Engineering and the Department of Computer Science. He is also a faculty member at the Vector Institute, where he holds a Canada CIFAR AI Chair, and a faculty affiliate at the Schwartz Reisman Institute. He was named an Alfred P. Sloan Research Fellow in Computer Science in 2022 and a Member of the Royal Society of Canada College in 2023. His research interests are at the intersection of security, privacy, and machine learning. His research has been cited in the press, including the BBC, New York Times, Popular Science, The Atlantic, the Wall Street Journal, and Wired. He currently serves as a Program Committee Chair of the IEEE Conference on Secure and Trustworthy Machine Learning (SaTML), which he co-founded in 2023. He earned his Ph.D. in Computer Science and Engineering at the Pennsylvania State University, working with Prof. Patrick McDaniel and supported by a Google PhD Fellowship. Upon graduating, he joined Google Brain for a year; he continues to spend time at Google DeepMind.
