In-Person Poster presentation / poster accept
Efficient Model Updates for Approximate Unlearning of Graph-Structured Data
Eli Chien · Chao Pan · Olgica Milenkovic
MH1-2-3-4 #156
Keywords: [ privacy ] [ graph unlearning ] [ machine unlearning ] [ Social Aspects of Machine Learning ]
Abstract:
With the adoption of recent laws ensuring the ``right to be forgotten'', the problem of machine unlearning has become of significant importance. This is particularly the case for graph-structured data and learning tools specialized for such data, including graph neural networks (GNNs). This work introduces the first known approach for \emph{approximate graph unlearning} with provable theoretical guarantees. The challenges in addressing the problem are twofold. First, multiple different types of unlearning requests need to be considered, including node feature, edge, and node unlearning. Second, to establish provable performance guarantees, one needs to carefully evaluate the process of feature mixing during propagation. We focus on analyzing Simple Graph Convolutions (SGC) and their generalized PageRank (GPR) extensions, thereby laying the theoretical foundations for unlearning GNNs. Empirical evaluations on six benchmark datasets demonstrate excellent performance/complexity/privacy trade-offs of our approach compared to complete retraining and to general methods that do not leverage graph information. For example, unlearning $200$ out of $1208$ training nodes of the Cora dataset leads to only a $0.1\%$ loss in test accuracy, yet offers a $4$-fold speed-up compared to complete retraining at an $(\epsilon,\delta)=(1,10^{-4})$ ``privacy cost''. We also observe a $12\%$ increase in test accuracy on the same dataset when compared to unlearning methods that do not leverage graph information, with comparable time complexity and the same privacy guarantee.
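The analysis centers on feature mixing during propagation in SGC-style models. Below is a minimal sketch of that propagation step only (not the authors' unlearning algorithm); the function name, toy graph, and feature values are illustrative assumptions. It shows why unlearning is non-trivial for graphs: each node's propagated representation mixes information from its neighborhood, so removing a node, edge, or feature affects representations beyond the deleted element itself.

```python
# Minimal sketch (not the authors' implementation) of SGC-style feature
# propagation. The toy graph and variable names below are illustrative
# assumptions, not taken from the paper.
import numpy as np

def sgc_propagate(adj, features, num_hops=2):
    """Propagate node features with the symmetrically normalized adjacency,
    as in Simple Graph Convolutions: X' = (D^{-1/2} (A + I) D^{-1/2})^K X."""
    a_hat = adj + np.eye(adj.shape[0])           # add self-loops
    deg = a_hat.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))     # D^{-1/2}
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt     # normalized adjacency
    out = features
    for _ in range(num_hops):                    # K rounds of feature mixing
        out = a_norm @ out
    return out

# Toy example: a 3-node path graph with 2-dimensional node features.
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
x = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
print(sgc_propagate(adj, x))
```

Because the linear model in SGC is trained on these propagated features, honoring an unlearning request requires correcting both the propagated representations and the trained weights; the paper's contribution is an approximate update for this correction with a certified $(\epsilon,\delta)$ privacy-style guarantee, rather than full retraining.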