Poster in Workshop: Generative and Experimental Perspectives for Biomolecular Design
Fine-tuning Pocket-conditioned 3D Molecule Generation via Reinforcement Learning
Daeseok Lee · Yongjun Cho
In drug discovery, generating molecules that bind to a target protein while possessing desired chemical properties is a fundamental challenge. Early deep learning models for molecule generation did not explicitly model the interactions between the generated molecules and the target protein: they generated molecules as SMILES strings or graphs and accounted for the target only through docking scores. This approach generalized poorly and required retraining the model for each new target. In contrast, the more recent pocket-conditioned 3D molecule generation approach achieves significant improvements by generating 3D molecular structures directly from a protein binding pocket. For example, Pocket2Mol, an atom-autoregressive model, shows strong performance in producing molecules with high binding affinity and desirable chemical properties. However, it has limitations: it can produce unrealistic stereochemistry, and the properties of its generated molecules are constrained because it is trained to replicate the molecules in its training set. To overcome these limitations, we propose a reinforcement learning method that fine-tunes the model to generate molecules with enhanced properties. To demonstrate its effectiveness, we conducted an experiment in which we used our method to reduce the model's stereochemical errors and to improve the drug-likeness and binding affinity of the generated molecules. Our results show that this approach not only resolves Pocket2Mol's existing problems but also sets new benchmarks on molecule generation metrics, highlighting its potential to advance molecule generation. The source code for inference and instructions for reproduction are available at https://github.com/deargen/Pocket2MolRLpublic.
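The abstract does not spell out the fine-tuning algorithm, but the sketch below shows how a REINFORCE-style policy-gradient update over an atom-autoregressive generator could look. Everything here is illustrative rather than the authors' implementation: `ToyAutoregressivePolicy` is a stand-in for Pocket2Mol, `reward_fn` is a placeholder for the composite reward over stereochemical validity, drug-likeness, and binding affinity, and `reinforce_step` is a generic policy-gradient step under those assumptions.

```python
import torch
import torch.nn as nn


class ToyAutoregressivePolicy(nn.Module):
    """Hypothetical stand-in for the pocket-conditioned generator: it emits
    a short sequence of discrete choices (think atom placements) and the
    log-probability of each sampled choice."""

    def __init__(self, n_actions=8, n_steps=5):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(n_steps, n_actions))

    def sample(self, pocket=None):
        dist = torch.distributions.Categorical(logits=self.logits)
        actions = dist.sample()             # one choice per generation step
        log_probs = dist.log_prob(actions)  # per-step log-probabilities
        return actions, log_probs


def reinforce_step(policy, optimizer, pockets, reward_fn, baseline=0.0):
    """One policy-gradient update over a batch of binding pockets. In the
    real setting, reward_fn would combine stereochemical validity,
    drug-likeness (e.g. QED), and a binding-affinity estimate."""
    optimizer.zero_grad()
    losses = []
    for pocket in pockets:
        molecule, log_probs = policy.sample(pocket)
        advantage = reward_fn(molecule, pocket) - baseline  # variance reduction
        # REINFORCE: raise the log-likelihood of sampled trajectories in
        # proportion to how much their reward exceeds the baseline.
        losses.append(-advantage * log_probs.sum())
    loss = torch.stack(losses).mean()
    loss.backward()
    optimizer.step()
    return loss.item()


# Toy run: the "reward" simply favors higher action indices.
policy = ToyAutoregressivePolicy()
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-2)
for _ in range(200):
    reinforce_step(policy, optimizer, pockets=[None] * 16,
                   reward_fn=lambda mol, _: mol.float().mean().item())
```

In practice, the constant baseline would likely be replaced by a running mean of recent rewards, and the reward would be computed with cheminformatics tooling (e.g., RDKit for drug-likeness) together with a docking or affinity-prediction model; those choices are assumptions here, not details taken from the paper.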