

Poster in Workshop: Bridging the Gap Between Practice and Theory in Deep Learning

Low-Rank Robust Graph Contrastive Learning

Yancheng Wang · Yingzhen Yang


Abstract:

Graph Neural Networks (GNNs) have been widely used to learn node representations and achieve outstanding performance on tasks such as node classification. However, noise, which inevitably exists in real-world graph data, can considerably degrade the performance of GNNs, as revealed by recent studies. In this work, we propose a novel and robust method, Low-Rank Robust Graph Contrastive Learning (LR-RGCL). LR-RGCL performs transductive node classification in two steps. First, a robust GCL encoder named RGCL is trained by prototypical contrastive learning with Bayesian nonparametric Prototype Learning (BPL). Next, using the robust features produced by RGCL, a novel and provable low-rank transductive classification algorithm classifies the unlabeled nodes in the graph. Our low-rank transductive classification algorithm is inspired by the low-frequency property of graph data and its labels, and we provide a theoretical result on the generalization of our algorithm. To the best of our knowledge, our theoretical result is among the first to demonstrate the advantage of low-rank learning in transductive classification. Extensive experiments on public benchmarks demonstrate the superior performance of LR-RGCL and the robustness of the learned node representations. The code of LR-RGCL is available at \url{https://anonymous.4open.science/r/LRR-GCL-3B3C/}.
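The second step of the method relies on the low-frequency property of graph data: node labels vary smoothly over the graph, so they are well approximated in the span of the low-frequency (top-eigenvalue) components of the normalized adjacency. The toy sketch below illustrates that idea only; the graph, the rank cutoff `k`, and the least-squares classifier are all illustrative assumptions, not the authors' actual LR-RGCL algorithm, which operates on the robust features produced by the RGCL encoder.

```python
# Illustrative sketch of low-rank transductive classification on a toy graph.
# All specifics (graph, rank k, least-squares fit) are assumptions for illustration.
import numpy as np

# Toy graph: two 5-node cliques (two communities) joined by a single edge.
n = 10
A = np.zeros((n, n))
A[:5, :5] = 1
A[5:, 5:] = 1
np.fill_diagonal(A, 0)
A[4, 5] = A[5, 4] = 1

# Symmetrically normalized adjacency; its top eigenvectors carry the
# low-frequency graph signals that a low-rank step would keep.
d = A.sum(axis=1)
A_norm = A / np.sqrt(np.outer(d, d))
eigvals, eigvecs = np.linalg.eigh(A_norm)  # ascending eigenvalues
k = 2                                      # assumed rank cutoff
U = eigvecs[:, -k:]                        # k lowest-frequency components

# Transduction: fit a least-squares classifier in the rank-k subspace
# using only two labeled nodes, then predict labels for every node.
labeled = [0, 9]
y = np.array([0, 1])          # one known label per community
Y = np.eye(2)[y]              # one-hot targets
W, *_ = np.linalg.lstsq(U[labeled], Y, rcond=None)
pred = (U @ W).argmax(axis=1)
print(pred.tolist())          # recovers the two communities
```

Because the two top eigenvalues of this near-disconnected graph are well separated from the rest of the spectrum, the rank-2 subspace encodes the community split, and the two labeled nodes suffice to classify all ten.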
