

Poster

GNNCert: Deterministic Certification of Graph Neural Networks against Adversarial Perturbations

Zaishuo Xia · Han Yang · Binghui Wang · Jinyuan Jia

Halle B #216
Thu 9 May 7:30 a.m. PDT — 9:30 a.m. PDT
 
Oral presentation: Oral 6B
Thu 9 May 6:45 a.m. PDT — 7:30 a.m. PDT

Abstract:

Graph classification, which aims to predict a label for a graph, has many real-world applications such as malware detection, fraud detection, and healthcare. However, many studies have shown that an attacker can carefully perturb the structure and/or node features of a graph so that a graph classifier misclassifies the perturbed graph. This vulnerability impedes the deployment of graph classification in security- and safety-critical applications. Existing empirical defenses lack formal robustness guarantees and can be broken by adaptive or unknown attacks. Existing provable defenses have the following limitations: 1) they achieve sub-optimal robustness guarantees for graph structure perturbations, 2) they cannot provide robustness guarantees against arbitrary node feature perturbations, 3) their robustness guarantees are probabilistic, meaning they could be incorrect with non-zero probability, and 4) they incur large computation costs. We aim to address these limitations in this work. We propose GNNCert, a certified defense against both graph structure and node feature perturbations for graph classification. GNNCert provably predicts the same label for a graph when the number of perturbed edges and the number of nodes with perturbed features are bounded. Our results on 8 benchmark datasets show that GNNCert outperforms three state-of-the-art methods.
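The abstract does not spell out the mechanism, but deterministic certification of this kind is typically obtained by a hash-based divide-and-vote scheme: the graph's edges are deterministically partitioned into disjoint subgraphs, a base classifier labels each subgraph, and the majority vote is returned. Since each perturbed edge lands in exactly one subgraph, it can flip at most one vote, so the vote gap yields a certified radius. The sketch below illustrates this idea for structure perturbations only; the names `edge_group`, `divide_and_vote`, and `classify` are hypothetical, and the radius formula is a conservative simplification rather than the paper's exact bound.

```python
import hashlib
from collections import Counter

def edge_group(u, v, num_groups):
    """Deterministically assign an (undirected) edge to one of
    `num_groups` subgraphs by hashing its canonical string form."""
    key = f"{min(u, v)}-{max(u, v)}".encode()
    return int(hashlib.md5(key).hexdigest(), 16) % num_groups

def divide_and_vote(edges, num_nodes, classify, num_groups):
    """Split the edge set into `num_groups` disjoint subgraphs via
    hashing, classify each subgraph, and take a majority vote.

    `classify(sub_edges, num_nodes)` is a stand-in for any base graph
    classifier (e.g. a trained GNN) returning an integer label.
    Returns (voted_label, certified_radius): the voted label provably
    stays the same as long as at most `certified_radius` edges are
    added or removed, because each perturbed edge changes at most one
    subgraph and hence at most one vote.
    """
    groups = [[] for _ in range(num_groups)]
    for u, v in edges:
        groups[edge_group(u, v, num_groups)].append((u, v))
    votes = Counter(classify(g, num_nodes) for g in groups)
    ranked = votes.most_common()
    top_label, n_top = ranked[0]
    n_second = ranked[1][1] if len(ranked) > 1 else 0
    # Each perturbed edge can move one vote from the winner to the
    # runner-up, shrinking the gap by 2; assuming adversarial
    # tie-breaking, certification holds while perturbations < gap / 2.
    radius = max((n_top - n_second - 1) // 2, 0)
    return top_label, radius

# Toy usage with a dummy classifier that checks whether a subgraph
# contains any edges at all (a real deployment would plug in a GNN).
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
label, radius = divide_and_vote(
    edges, num_nodes=4,
    classify=lambda es, n: int(len(es) > 0),
    num_groups=7,
)
print(label, radius)  # voted label and certified number of edge flips
```

Because the partition is a fixed hash rather than random sampling, the vote counts, and therefore the certificate, are exact and reproducible, which is what makes the guarantee deterministic rather than probabilistic.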
