

Poster in Workshop: Bridging the Gap Between Practice and Theory in Deep Learning

How Can Graph Neural Networks Learn to Perform Latent Representation Inference?

Hongyi Guo · Zhaoran Wang


Abstract:

In this work, we theoretically examine how graph neural networks (GNNs) infer latent representations for each node in a homogeneous graph. We begin by identifying what a desirable representation is. We base our theory on the exchangeability of the graph, a premise that is assumed, explicitly or implicitly, whenever parameters are shared among all nodes. We show that exchangeability implies the existence of a latent variable model in which the latent posterior constitutes a minimal yet sufficient representation. We then develop a self-supervised learning mechanism that leverages observed edges to predict potential edges for a newly introduced node, and we parameterize this latent inference procedure with GNNs. We prove that, with the correct parameters, which can be learned end-to-end, GNNs can infer the latent posterior with an approximation error that diminishes with the order of the graph and the depth of the GNN. For the training phase, we provide theoretical guarantees on the generalization error under both the self-supervised and supervised learning mechanisms.
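To make the self-supervised mechanism concrete, the sketch below holds out one node, runs a small GNN on the remaining graph, and scores the held-out node's edges against the learned node representations. All specifics here (mean-aggregation layers, the dot-product edge scorer, the projection of the new node's features) are illustrative assumptions for exposition, not the authors' construction or guarantees.

```python
# Hypothetical sketch: self-supervised edge prediction for a newly introduced node.
# The GNN output for each node plays the role of an approximate latent posterior.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SimpleGNN(nn.Module):
    """Two rounds of mean-aggregation message passing with parameters shared across nodes."""

    def __init__(self, in_dim, hidden_dim):
        super().__init__()
        self.layers = nn.ModuleList(
            [nn.Linear(in_dim, hidden_dim), nn.Linear(hidden_dim, hidden_dim)]
        )

    def forward(self, x, adj):
        # adj: (n, n) dense adjacency; row-normalize so aggregation is a mean over neighbors.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        for layer in self.layers:
            x = F.relu(layer((adj @ x) / deg + x))
        return x  # (n, hidden_dim) node representations


def heldout_edge_loss(gnn, x, adj, new_node):
    """Self-supervised objective: predict the held-out node's edges from the observed subgraph."""
    n = adj.size(0)
    keep = torch.arange(n) != new_node
    reps = gnn(x[keep], adj[keep][:, keep])           # embed the graph without the new node
    query = x[new_node] @ gnn.layers[0].weight.t()    # crude projection of the new node's features (assumption)
    scores = reps @ query                             # dot-product edge logits (assumption)
    targets = adj[new_node, keep]                     # observed edges of the held-out node
    return F.binary_cross_entropy_with_logits(scores, targets)


if __name__ == "__main__":
    n, d = 20, 8
    x = torch.randn(n, d)                             # synthetic node features
    adj = (torch.rand(n, n) < 0.2).float()
    adj = ((adj + adj.t()) > 0).float().fill_diagonal_(0)  # symmetric, no self-loops
    gnn = SimpleGNN(d, 16)
    loss = heldout_edge_loss(gnn, x, adj, new_node=0)
    loss.backward()                                   # end-to-end training signal for the GNN
    print(float(loss))
```

In practice one would average this loss over many held-out nodes and optimize the GNN parameters end-to-end; the abstract's theory concerns how well the resulting representations approximate the latent posterior, which this toy code does not attempt to verify.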
