

Poster in Workshop: Integrating Generative and Experimental Platforms for Biomolecular Design

Towards Scaling Laws for Language Model Powered Evolutionary Algorithms: Case Study on Molecular Optimization

Tigran Fahradyan · Filya Geikyan · Philipp Guevorguian · Hrant Khachatrian


Abstract:

The improvement of large language models (LLMs) has come from scaling pretraining. Recently, a new scaling paradigm has emerged, test-time compute, which uses additional computation at inference time to obtain better results. Extensive work has proposed various test-time compute scaling strategies, but modeling the scaling dynamics of these methods remains an open research question. In this work we bridge this gap by developing a parametric law for language model-enhanced evolutionary algorithms as a function of the number of language model parameters (N) and the number of evolutionary iterations (k). We show that on molecular optimization tasks, our law accurately extrapolates 2.5 times in both N and k. Additionally, our law suggests a tradeoff between N and k, which we validate by matching the performance of a 3.2B model with an 8.5 times smaller 380M model using 2.3 times more evolutionary algorithm steps.
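The abstract does not state the functional form of the parametric law, so the sketch below is purely illustrative: it assumes a saturating power law in N and k, fits it with scipy on synthetic stand-in data, and then solves for the number of iterations a smaller model would need to match a larger one. The form of `scaling_law`, all constants, and the synthetic scores are assumptions, not the authors' method.

```python
# Hypothetical sketch of fitting a two-variable scaling law P(N, k).
# Assumed form (NOT from the paper): P = p_max - a*N^-alpha - b*k^-beta.
import numpy as np
from scipy.optimize import curve_fit

def scaling_law(X, p_max, a, alpha, b, beta):
    """Assumed saturating power law in model size N and EA iterations k."""
    N, k = X
    return p_max - a * N ** (-alpha) - b * k ** (-beta)

# Synthetic stand-in data; a real fit would use measured optimization scores.
rng = np.random.default_rng(0)
N = np.array([380e6, 1.0e9, 3.2e9] * 4)                  # model sizes
k = np.repeat([50.0, 100.0, 200.0, 400.0], 3)            # EA iterations
true_scores = scaling_law((N, k), 0.9, 5.0, 0.3, 2.0, 0.5)
scores = true_scores + rng.normal(0.0, 0.005, size=true_scores.shape)

# Fit the assumed law to the (N, k, score) observations.
params, _ = curve_fit(
    scaling_law, (N, k), scores,
    p0=[1.0, 1.0, 0.3, 1.0, 0.5], maxfev=20000,
)
p_max, a, alpha, b, beta = params

# N-k tradeoff: iterations a 380M model needs to match a 3.2B model at k=100.
target = scaling_law((3.2e9, 100.0), *params)
k_match = (b / (p_max - a * 380e6 ** (-alpha) - target)) ** (1.0 / beta)
print(f"380M model matches 3.2B@k=100 at k ~ {k_match:.0f}")
```

Under this assumed form, the tradeoff reported in the abstract would correspond to the fitted exponents alpha and beta: the flatter the gain from model size relative to iterations, the fewer extra EA steps a smaller model needs to close the gap.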
