

Virtual presentation / top 25% paper

Serving Graph Compression for Graph Neural Networks

Si Si · Felix Yu · Ankit Singh Rawat · Cho-Jui Hsieh · Sanjiv Kumar

Keywords: [ model compression ] [ graph neural networks ] [ Deep Learning and representational learning ]


Abstract:

Serving a GNN model online is challenging: in many applications where testing nodes are connected to training nodes, one has to propagate information from training nodes to testing nodes to achieve the best performance, and storing the whole training set (including the training graph and node features) during the inference stage is prohibitive for large-scale problems. In this paper, we study graph compression to reduce the storage requirement for GNN serving. Given a GNN model to be served, we propose to construct a compressed graph with a smaller number of nodes. At serving time, one simply replaces the original training graph with this compressed graph, without changing the actual GNN model or the forward pass. We carefully analyze the error in the forward pass and derive simple ways to construct the compressed graph that minimize the approximation error. Experimental results on semi-supervised node classification demonstrate that the proposed method can significantly reduce the serving space requirement for GNN inference.
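The core idea of the abstract can be illustrated with a toy sketch: run the same one-layer GCN forward pass, first on the full training graph and then on a coarsened graph with fewer nodes, keeping the model weights unchanged. The clustering-based compression below (`compress_graph`, hard one-hot assignments, cluster-mean features) is an illustrative assumption, not the paper's actual construction, which is derived by minimizing the forward-pass approximation error.

```python
import numpy as np

def normalize_adj(A):
    # Symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}
    A = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
    return d_inv_sqrt @ A @ d_inv_sqrt

def gcn_layer(A_hat, X, W):
    # One GCN layer: H = ReLU(A_hat X W); the weights W are fixed at serving.
    return np.maximum(A_hat @ X @ W, 0.0)

def compress_graph(A, X, assign):
    """Coarsen (A, X) given a hard cluster assignment (hypothetical scheme).

    assign[i] is the cluster id of node i. With P the n-by-k one-hot
    assignment matrix, the compressed graph is P^T A P with cluster-mean
    features.
    """
    n, k = len(assign), int(assign.max()) + 1
    P = np.zeros((n, k))
    P[np.arange(n), assign] = 1.0
    sizes = P.sum(axis=0)
    A_c = P.T @ A @ P                  # aggregated inter-cluster edge weights
    X_c = (P.T @ X) / sizes[:, None]   # mean feature vector per cluster
    return A_c, X_c

# Toy example: 6 training nodes, 2 clusters, 3-dim features, 2 output dims.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)
X = rng.normal(size=(6, 3))
W = rng.normal(size=(3, 2))
assign = np.array([0, 0, 0, 1, 1, 1])

H_full = gcn_layer(normalize_adj(A), X, W)       # serve with the full graph
A_c, X_c = compress_graph(A, X, assign)
H_comp = gcn_layer(normalize_adj(A_c), X_c, W)   # serve with a 2-node graph
```

The same weight matrix `W` is applied in both passes; only the stored graph shrinks from 6 nodes to 2, which is the storage saving the paper targets.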
