

Poster in Workshop: Integrating Generative and Experimental Platforms for Biomolecular Design

Aligning Chemical and Protein Language Models with Continuous Feedback using Energy Rank Alignment

Shriram Chennakesavalu · Frank Hu · Sebastian Ibarraran · Grant Rotskoff


Abstract:

Large, autoregressive models trained on databases of chemical compounds and biomolecules have yielded powerful generators, but we still lack robust strategies for controlled generation. This molecular search problem closely resembles the "alignment" problem for large language models, though for many chemical tasks we have a specific and easily evaluated reward function. Here, we introduce an algorithm called energy rank alignment (ERA) that leverages an explicit reward function to produce a gradient-based objective, which we use to optimize autoregressive policies. We deploy this approach to align molecular transformers and protein language models to generate molecules and protein sequences, respectively, with externally specified properties, and find that it does so robustly, searching through diverse parts of chemical space. The algorithm is highly scalable, does not require reinforcement learning, and performs well relative to direct preference optimization (DPO) when the number of preference observations per pairing is small.
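The abstract does not spell out the objective, but the description (an explicit reward turned into a gradient-based preference loss, no reinforcement learning) suggests a shape like the sketch below. This is a minimal, hypothetical instantiation, not the paper's exact ERA loss: the function name energy_rank_loss, the inverse temperature beta, and the cross-entropy form matching policy pairwise preferences to Boltzmann targets derived from the energies are all illustrative assumptions.

```python
# Illustrative sketch only -- assumes the alignment objective can be cast as
# a cross-entropy between the policy's pairwise preference probabilities and
# Boltzmann targets computed from an explicit energy/reward. Not the authors'
# exact ERA objective.
import torch
import torch.nn.functional as F

def energy_rank_loss(
    logp_a: torch.Tensor,    # log pi_theta(y_a | x), shape (batch,)
    logp_b: torch.Tensor,    # log pi_theta(y_b | x), shape (batch,)
    energy_a: torch.Tensor,  # U(y_a), lower energy = more desirable
    energy_b: torch.Tensor,  # U(y_b)
    beta: float = 1.0,       # hypothetical inverse temperature on the energies
) -> torch.Tensor:
    # Target probability that y_a is preferred, from the explicit energies:
    # p*(a > b) = sigmoid(beta * (U_b - U_a)).
    target = torch.sigmoid(beta * (energy_b - energy_a))
    # Policy preference logit from the sequence log-likelihood difference.
    logits = logp_a - logp_b
    # Gradient-based objective: plain cross-entropy, no RL rollout needed.
    return F.binary_cross_entropy_with_logits(logits, target)

# Toy usage: two sampled candidates per prompt, each scored by the reward.
logp_a = torch.tensor([-10.2, -8.7], requires_grad=True)
logp_b = torch.tensor([-9.5, -11.1], requires_grad=True)
loss = energy_rank_loss(
    logp_a, logp_b,
    energy_a=torch.tensor([1.3, 0.2]),
    energy_b=torch.tensor([2.0, 0.9]),
)
loss.backward()  # in a real setup this would flow into the policy's weights
```

One way to read this sketch against the abstract's DPO comparison: here the target preference probability is computed directly from the explicit energies rather than estimated from binary preference labels, so a single candidate pairing already carries a graded training signal, which is consistent with performing well when few preference observations per pairing are available.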
