

Spotlight in Workshop: Machine Learning for Drug Discovery (MLDD)

GRPE: Relative Positional Encoding for Graph Transformer

Wonpyo Park · Woong-Gi Chang · Donggeon Lee · Juntae Kim · Seung-won Hwang

Keywords: [ graph representation learning ] [ transformer ]


Abstract:

Designing an efficient model to encode graphs is a key challenge of molecular representation learning. The Transformer, built upon efficient self-attention, is a natural choice for graph processing, but it requires explicit incorporation of positional information. Existing approaches either linearize a graph to encode absolute position in the resulting sequence of nodes, or encode relative position with respect to another node using bias terms. The former loses precise relative-position information through linearization, while the latter loses the tight integration of node-edge and node-spatial information. In this work, we propose a relative positional encoding for graphs that overcomes the weaknesses of previous approaches. Our method encodes a graph without linearization and considers both node-spatial and node-edge relations. We name our method Graph Relative Positional Encoding (GRPE), dedicated to graph representation learning. Experiments conducted on various molecular property prediction datasets show that the proposed method significantly outperforms previous approaches. Our code is publicly available at https://github.com/lenscloth/GRPE.
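To make the idea concrete, below is a minimal, hedged sketch of one self-attention head with graph-relative encodings in the spirit the abstract describes: instead of linearizing the graph, each query interacts with learned embeddings indexed by shortest-path distance (the node-spatial relation) and by edge type (the node-edge relation). The class name `GraphRelativeAttention`, the embedding-table sizes, and the exact way the spatial and edge terms enter the attention logits are illustrative assumptions, not the authors' reference implementation; see the linked repository for the actual formulation.

```python
# Hedged sketch (not the authors' reference code): a single attention head
# with graph-relative positional encodings. The precise combination of the
# spatial and edge terms below is an assumption made for illustration.
import math
import torch
import torch.nn as nn


class GraphRelativeAttention(nn.Module):
    def __init__(self, dim: int, max_dist: int = 8, num_edge_types: int = 16):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        # Learned relative encodings: one embedding per shortest-path
        # distance bucket (node-spatial) and one per edge type (node-edge).
        self.spatial = nn.Embedding(max_dist + 1, dim)
        self.edge = nn.Embedding(num_edge_types, dim)
        self.max_dist = max_dist

    def forward(self, x, dist, edge_type):
        # x: (n, dim) node features; dist: (n, n) shortest-path distances;
        # edge_type: (n, n) integer edge types (0 = no edge), both long tensors.
        q, k, v = self.q(x), self.k(x), self.v(x)
        dist = dist.clamp(max=self.max_dist)
        # Content term plus query-conditioned spatial and edge terms, so that
        # positional information interacts with node features rather than
        # entering as a fixed scalar bias.
        logits = q @ k.t()
        logits = logits + torch.einsum("id,ijd->ij", q, self.spatial(dist))
        logits = logits + torch.einsum("id,ijd->ij", q, self.edge(edge_type))
        attn = torch.softmax(logits / math.sqrt(q.size(-1)), dim=-1)
        return attn @ v


# Example usage on a random 5-node graph with 32-dimensional features.
n, d = 5, 32
layer = GraphRelativeAttention(d)
out = layer(torch.randn(n, d),
            torch.randint(0, 4, (n, n)),
            torch.randint(0, 16, (n, n)))
```

As a design note, letting the positional embeddings interact with the query (rather than adding a fixed scalar bias per node pair) is one way to obtain the tight integration of node features with node-spatial and node-edge information that the abstract identifies as missing from bias-term approaches.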
