Dataset on GitHub
July 29, 2024
Small Molecule Optimization with Large Language Models
publish date:
2024-07-26
authors:
Philipp Guevorguian et al.
paper id:
2407.18897v1
download:
abstract:
Recent advancements in large language models have opened new possibilities for generative molecular drug design. We present Chemlactica and Chemma, two language models fine-tuned on a novel corpus of 110M molecules with computed properties, totaling 40B tokens. These models demonstrate strong performance in generating molecules with specified properties and predicting new molecular characteristics from limited samples. We introduce a novel optimization algorithm that leverages our language models to optimize molecules for arbitrary properties given limited access to a black box oracle. Our approach combines ideas from genetic algorithms, rejection sampling, and prompt optimization. It achieves state-of-the-art performance on multiple molecular optimization benchmarks, including an 8% improvement on Practical Molecular Optimization compared to previous methods. We publicly release the training corpus, the language models and the optimization algorithm.
QA:
coming soon
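
The abstract describes an optimization loop that combines genetic-algorithm-style selection, rejection sampling, and prompt optimization around a language model, under a limited black-box oracle budget. Below is a minimal, hedged sketch of that general idea, not the paper's released implementation: `propose_candidates` is a random string mutator standing in for sampling from Chemlactica/Chemma, and all names and parameters (`build_prompt`, `optimize`, `oracle_budget`, `accept_quantile`, ...) are illustrative assumptions.

```python
# Sketch of LLM-guided molecular optimization with a black-box oracle:
# keep an elite pool, build prompts from the best molecules seen so far,
# sample candidates, score them with the oracle, and accept only the
# top fraction of each batch (rejection step). Standard library only.
import heapq
import random
from typing import Callable, List, Tuple

def build_prompt(top_pool: List[Tuple[float, str]]) -> str:
    """Assemble a prompt from the highest-scoring molecules seen so far."""
    lines = [f"[PROPERTY {score:.3f}] {smiles}" for score, smiles in top_pool]
    lines.append("[PROPERTY ?]")  # ask the model to complete a better molecule
    return "\n".join(lines)

def propose_candidates(prompt: str, n: int) -> List[str]:
    """Placeholder for sampling from a fine-tuned language model.
    Randomly mutates SMILES-like strings so the sketch stays runnable."""
    seeds = [ln.split("] ")[-1] for ln in prompt.splitlines() if "] " in ln] or ["CCO"]
    alphabet = "CNOcn()=123"
    out = []
    for _ in range(n):
        s = list(random.choice(seeds))
        s[random.randrange(len(s))] = random.choice(alphabet)
        out.append("".join(s))
    return out

def optimize(oracle: Callable[[str], float],
             start_pool: List[str],
             oracle_budget: int = 200,
             pool_size: int = 8,
             batch_size: int = 16,
             accept_quantile: float = 0.5) -> Tuple[float, str]:
    """Maximize `oracle` under a fixed call budget via propose / reject / keep-best."""
    pool = [(oracle(s), s) for s in start_pool]      # initial oracle calls
    calls = len(pool)
    while calls < oracle_budget:
        top = heapq.nlargest(pool_size, pool)        # elitist selection
        candidates = propose_candidates(build_prompt(top), batch_size)
        scored = []
        for smiles in candidates:
            if calls >= oracle_budget:
                break
            scored.append((oracle(smiles), smiles))
            calls += 1
        if not scored:
            break
        # Rejection step: keep only candidates at or above the batch threshold.
        threshold = sorted(s for s, _ in scored)[int(len(scored) * accept_quantile)]
        pool.extend((s, m) for s, m in scored if s >= threshold)
        pool = heapq.nlargest(4 * pool_size, pool)   # bound pool size
    return max(pool)

if __name__ == "__main__":
    # Toy oracle: reward nitrogen-rich strings (purely illustrative).
    toy_oracle = lambda smi: smi.count("N") + smi.count("n") + 0.1 * random.random()
    best_score, best_smiles = optimize(toy_oracle, start_pool=["CCO", "c1ccccc1", "CCN"])
    print(f"best score {best_score:.2f} for {best_smiles}")
```

In the paper's setting, the proposal step would presumably sample molecules from the fine-tuned models conditioned on property tags, and candidates would likely be checked for chemical validity before spending oracle calls; those details are omitted here to keep the sketch self-contained.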
Edited and compiled by: wanghaisheng. Last updated: July 29, 2024