

Poster in Workshop: Workshop on Distributed and Private Machine Learning

Membership Inference Attack on Graph Neural Networks

Iyiola Emmanuel Olatunji · Wolfgang Nejdl · Megha Khosla


Abstract:

We focus on how trained Graph Neural Network (GNN) models can leak information about the \emph{member} nodes they were trained on. We introduce two realistic inductive settings for carrying out membership inference attacks on GNNs. Choosing the simplest possible attack model, which utilizes only the posteriors of the trained model, we thoroughly analyze the properties of GNNs that dictate the differences in their robustness towards Membership Inference (MI) attacks. The surprising and worrying fact is that the attack is successful even if the target model generalizes well. Whereas in traditional machine learning models overfitting is considered the main cause of such leakage, we show that in GNNs the additional structural information is the major contributing factor. We support our findings with extensive experiments on four representative GNN models. On a positive note, we identify properties of certain models that make them less vulnerable to MI attacks than others.
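For readers unfamiliar with posterior-based membership inference, the sketch below illustrates the general idea in a minimal, self-contained way. It is not the authors' pipeline: the synthetic Dirichlet posteriors (standing in for the outputs of shadow and target GNNs), the sorted-posterior-plus-entropy features, and the logistic-regression attack classifier are all illustrative assumptions.

```python
# Minimal sketch of a posterior-based membership inference attack.
# Assumptions (not from the paper): synthetic posteriors stand in for
# shadow/target GNN outputs; the attack model is a logistic regression
# over class-order-invariant posterior features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
num_classes = 7  # illustrative label-space size

def synthetic_posteriors(n, member):
    """Simulate softmax outputs: member nodes tend to get more confident posteriors."""
    alpha = np.full(num_classes, 0.2 if member else 1.0)  # smaller alpha -> peakier
    return rng.dirichlet(alpha, size=n)

def attack_features(posteriors):
    """Sorted posteriors (invariant to class order) plus prediction entropy."""
    sorted_p = np.sort(posteriors, axis=1)[:, ::-1]
    entropy = -np.sum(posteriors * np.log(posteriors + 1e-12), axis=1, keepdims=True)
    return np.hstack([sorted_p, entropy])

# "Shadow" posteriors with known membership labels, used to train the attack model.
shadow_X = np.vstack([synthetic_posteriors(500, True), synthetic_posteriors(500, False)])
shadow_y = np.concatenate([np.ones(500), np.zeros(500)])
attack_model = LogisticRegression(max_iter=1000).fit(attack_features(shadow_X), shadow_y)

# "Target" posteriors whose membership the attacker wants to infer.
target_X = np.vstack([synthetic_posteriors(500, True), synthetic_posteriors(500, False)])
target_y = np.concatenate([np.ones(500), np.zeros(500)])
scores = attack_model.predict_proba(attack_features(target_X))[:, 1]
print(f"Attack AUC: {roc_auc_score(target_y, scores):.3f}")
```

In an actual attack the synthetic posteriors would be replaced by the target GNN's output distributions for candidate nodes, with the attack classifier trained on posteriors from a shadow model whose membership labels are known.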
