
Pre-training Sequence, Structure, and Surface Features for Comprehensive Protein Representation Learning

Youhan Lee · Hasun Yu · Jaemyung Lee · Jaehoon Kim

Halle B #5
Tue 7 May 7:30 a.m. PDT — 9:30 a.m. PDT


Proteins can be represented in various ways, including their sequences, 3D structures, and surfaces. While recent studies have successfully employed sequence- or structure-based representations to address multiple tasks in protein science, protein surface information, a critical determinant of protein function, has been largely overlooked. In this paper, we present a pre-training strategy that incorporates information from protein sequences, 3D structures, and surfaces to improve protein representation learning. Specifically, we use Implicit Neural Representations (INRs) to learn surface characteristics, and name the resulting model ProteinINR. We confirm that ProteinINR successfully reconstructs protein surfaces, and we integrate this surface learning into an existing pre-training strategy for sequences and structures. Our results demonstrate that this approach improves performance across various downstream tasks, underscoring the importance of incorporating surface attributes in protein representation learning.
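As background on the technique the abstract names: an implicit neural representation encodes a surface as a continuous function, typically a small MLP that maps a 3D coordinate to a scalar such as a signed distance, with the surface recovered as the function's zero level set. The sketch below is a generic, untrained NumPy illustration of this idea; the layer sizes and function names are illustrative assumptions, not the actual ProteinINR architecture.

```python
import numpy as np

def init_mlp(layer_sizes, seed=0):
    """Randomly initialize weights for a small fully connected network."""
    rng = np.random.default_rng(seed)
    params = []
    for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]):
        w = rng.normal(0.0, np.sqrt(2.0 / n_in), size=(n_in, n_out))
        b = np.zeros(n_out)
        params.append((w, b))
    return params

def inr_forward(params, xyz):
    """Map 3D coordinates of shape (N, 3) to one scalar per point,
    interpreted here as a signed distance to the surface."""
    h = xyz
    for w, b in params[:-1]:
        h = np.maximum(h @ w + b, 0.0)  # ReLU hidden layers
    w, b = params[-1]
    return h @ w + b                    # linear output head

# Query the (untrained) field at a few sample points.
params = init_mlp([3, 64, 64, 1])
points = np.array([[0.0, 0.0, 0.0], [0.5, -0.2, 0.1]])
values = inr_forward(params, points)    # shape (2, 1)
```

In practice such a network is trained so that its zero level set matches the target surface (here, a protein surface), after which a mesh can be extracted with an algorithm such as marching cubes.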
