Virtual presentation / top 5% paper

Do We Really Need Complicated Model Architectures For Temporal Networks?

Weilin Cong · Si Zhang · Jian Kang · Baichuan Yuan · Hao Wu · Xin Zhou · Hanghang Tong · Mehrdad Mahdavi

Keywords: [ temporal graph ] [ link prediction ] [ Applications ]


Abstract:

Recurrent neural networks (RNNs) and the self-attention mechanism (SAM) are the de facto methods for extracting spatial-temporal information in temporal graph learning. Interestingly, we find that although both RNNs and SAM can achieve good performance, in practice neither is always necessary. In this paper, we propose GraphMixer, a conceptually and technically simple architecture that consists of three components: (1) a link encoder based solely on multi-layer perceptrons (MLPs) that summarizes the information from temporal links, (2) a node encoder based solely on neighbor mean-pooling that summarizes node information, and (3) an MLP-based link classifier that performs link prediction on the outputs of the two encoders. Despite its simplicity, GraphMixer attains outstanding performance on temporal link prediction benchmarks, with faster convergence and better generalization. These results motivate us to rethink the importance of simpler model architectures.
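The abstract's three-component description maps naturally onto code. Below is a minimal PyTorch sketch of how such an architecture could fit together, assuming dummy feature tensors; the names (GraphMixerSketch, encode_node) and the feature dimensions are illustrative assumptions, not the authors' implementation, and details of the real model (e.g., its handling of time encodings for temporal links) are omitted.

```python
import torch
import torch.nn as nn

class MLP(nn.Module):
    """Two-layer perceptron shared by the link encoder and the classifier."""
    def __init__(self, in_dim, hidden_dim, out_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, out_dim),
        )

    def forward(self, x):
        return self.net(x)

class GraphMixerSketch(nn.Module):
    """Illustrative three-component model in the spirit of the abstract."""
    def __init__(self, link_feat_dim, node_feat_dim, hidden_dim):
        super().__init__()
        # (1) MLP-based link encoder: summarizes a node's recent temporal links.
        self.link_encoder = MLP(link_feat_dim, hidden_dim, hidden_dim)
        # (3) MLP-based link classifier: scores a candidate link from the
        # concatenated representations of its two endpoints.
        self.link_classifier = MLP(2 * (hidden_dim + node_feat_dim), hidden_dim, 1)

    def encode_node(self, node_feat, neighbor_feats, recent_link_feats):
        # Encode each recent temporal link with the MLP, then average them.
        link_repr = self.link_encoder(recent_link_feats).mean(dim=0)
        # (2) Node encoder: plain mean-pooling over neighbor features.
        node_repr = node_feat + neighbor_feats.mean(dim=0)
        return torch.cat([link_repr, node_repr], dim=-1)

    def forward(self, src_inputs, dst_inputs):
        h_src = self.encode_node(*src_inputs)
        h_dst = self.encode_node(*dst_inputs)
        # Logit for whether the (src, dst) link exists.
        return self.link_classifier(torch.cat([h_src, h_dst], dim=-1))

# Usage with random stand-in features (shapes: node feature, neighbor
# features, recent temporal-link features per endpoint).
model = GraphMixerSketch(link_feat_dim=16, node_feat_dim=8, hidden_dim=32)
src = (torch.randn(8), torch.randn(5, 8), torch.randn(10, 16))
dst = (torch.randn(8), torch.randn(7, 8), torch.randn(4, 16))
score = model(src, dst)  # scalar logit for the candidate link
```

Note how every component is either an MLP or a mean-pooling step: the sketch contains no recurrence and no attention, which is the point the abstract makes.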
