SmileyLlama: Modifying Large Language Models for Directed Chemical Space Exploration

arXiv:2409.02231v5 Announce Type: replace-cross Abstract: We show that large language models (LLMs) can be transformed via supervised fine-tuning (SFT) on engineered prompts into SmileyLlama, a model for exploring the chemical space of drug molecules. We benchmark SmileyLlama against pre-trained LLMs and chemical language models (CLMs) trained from scratch for generating valid and novel drug-like molecules, and use direct preference optimization (DPO) both to improve SmileyLlama's adherence to a prompt and as part of the iMiner reinforcement learning framework to predict molecules with optimized 3D conformations and high binding affinity to drug targets. By training an LLM to speak directly as a CLM, while retaining most of its natural language capabilities, we show that it can reliably generate molecules with user-specified properties rather than acting only as a chatbot with knowledge of chemistry or as a virtual assistant. While SmileyLlama is geared toward drug discovery, the SFT/DPO/LLM framework can be extended to other chemical, biological, and materials applications.
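To make the SFT setup concrete, the sketch below builds a prompt/completion training example that pairs a property-conditioned instruction with a target SMILES string. The field names, prompt wording, and helper function are illustrative assumptions, not the paper's exact format.

```python
import json

def make_sft_example(properties, smiles):
    """Build one hypothetical SFT record: an engineered prompt listing
    desired molecular properties, paired with a SMILES completion."""
    prompt = (
        "Generate a drug-like molecule as a SMILES string with the "
        "following properties: " + ", ".join(properties)
    )
    return {"prompt": prompt, "completion": smiles}

# Placeholder example: aspirin as the target molecule.
example = make_sft_example(
    ["molecular weight < 500", "logP < 5"],
    "CC(=O)Oc1ccccc1C(=O)O",
)
print(json.dumps(example))
```

A dataset of such records, one JSON object per line, is the typical input format for supervised fine-tuning pipelines; DPO would additionally pair each prompt with a preferred and a dispreferred completion.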
