Researchers have raised concerns about hallucinations in large language models (LLMs), which generate plausible but inaccurate or unrelated content. Yet these hallucinations may hold potential in creativity-driven fields like drug discovery, where innovation is essential. LLMs have been widely applied in scientific domains such as materials science, biology, and chemistry, aiding tasks like molecular description and drug design. While domain-specific models like MolT5 offer greater accuracy, general-purpose LLMs often produce hallucinated outputs when not fine-tuned. Despite their lack of factual consistency, such outputs can still provide valuable signals, such as high-level molecular descriptions and potential compound applications, thereby supporting exploratory stages of drug discovery.
Drug discovery, a costly and time-intensive process, involves evaluating vast chemical spaces and identifying novel solutions to biological challenges. Previous studies have used machine learning and generative models to assist in this field, with researchers exploring the integration of LLMs for molecule design, dataset curation, and prediction tasks. Hallucinations in LLMs, often viewed as a drawback, can mimic creative processes by recombining knowledge to generate novel ideas. This perspective aligns with creativity’s role in innovation, exemplified by groundbreaking accidental discoveries like penicillin. By leveraging hallucinated insights, LLMs could advance drug discovery by identifying molecules with unique properties and fostering high-level innovation.
Researchers at ScaDS.AI and Dresden University of Technology hypothesize that hallucinations can enhance LLM performance in drug discovery. Using seven instruction-tuned LLMs, including GPT-4o and Llama-3.1-8B, they incorporated hallucinated natural-language descriptions of molecules' SMILES (Simplified Molecular Input Line Entry System) strings into prompts for classification tasks. The results confirmed their hypothesis, with Llama-3.1-8B achieving an 18.35% ROC-AUC improvement over the baseline. Larger models and hallucinations generated in Chinese demonstrated the greatest gains. Analyses revealed that hallucinated text, while factually unrelated, carries information that aids predictions. This study highlights hallucinations' potential in pharmaceutical research and offers new perspectives on leveraging LLMs for innovative drug discovery.
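Concretely, the hallucinated descriptions can be produced by prompting an LLM to describe a molecule from its SMILES string alone. The sketch below does this with GPT-4o via the OpenAI API; the prompt wording is illustrative rather than the paper's verbatim template.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def hallucinate_description(smiles: str) -> str:
    """Ask the model to describe a molecule from its SMILES string alone."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "You are an expert in drug discovery."},
            {"role": "user",
             "content": f"Describe the molecule with the SMILES string {smiles}."},
        ],
        temperature=0.7,  # sampling temperature; the study varied this setting
    )
    return response.choices[0].message.content

print(hallucinate_description("CC(=O)OC1=CC=CC=C1C(=O)O"))  # aspirin
```

Because the model sees only the SMILES string, with no retrieval or fine-tuning, the returned description is exactly the kind of plausible but often factually inconsistent text the study treats as a hallucination.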
To generate hallucinations, SMILES strings of molecules are translated into natural language using a standardized prompt in which the system is defined as an "expert in drug discovery." The generated descriptions are evaluated for factual consistency using the HHEM-2.1-Open model, with MolT5-generated text as the reference. Results show low factual consistency across LLMs: ChemLLM scores 20.89%, while the others average between 7.42% and 13.58%. Drug discovery tasks are then formulated as binary classification problems, predicting specific molecular properties via next-token prediction. Prompts combine the SMILES string, the description, and a task instruction, and the model's output is constrained to "Yes" or "No", taking whichever token has the higher probability.
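This Yes/No scheme can be sketched with a Hugging Face causal LM as follows. The prompt template, checkpoint, and property question are illustrative stand-ins, not the paper's exact setup.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "meta-llama/Llama-3.1-8B-Instruct"  # one of the seven models studied
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, torch_dtype=torch.bfloat16)
model.eval()

# Token ids for the two allowed answers; the leading space matters for
# BPE tokenizers like Llama's, where " Yes" is a single token.
YES_ID = tokenizer.encode(" Yes", add_special_tokens=False)[0]
NO_ID = tokenizer.encode(" No", add_special_tokens=False)[0]

def yes_probability(smiles: str, description: str, instruction: str) -> float:
    """P(Yes) normalised over the {Yes, No} pair for a single molecule."""
    prompt = (
        "You are an expert in drug discovery.\n"
        f"SMILES: {smiles}\n"
        f"Description: {description}\n"
        f"{instruction} Answer with Yes or No.\n"
        "Answer:"
    )
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        next_token_logits = model(**inputs).logits[0, -1]  # next-token logits
    pair = torch.stack([next_token_logits[YES_ID], next_token_logits[NO_ID]])
    return torch.softmax(pair, dim=0)[0].item()

# The prediction is whichever answer token is more probable.
p = yes_probability(
    "CC(=O)OC1=CC=CC=C1C(=O)O",  # aspirin
    "A small aromatic compound bearing an ester and a carboxylic acid.",
    "Can this molecule penetrate the blood-brain barrier?",
)
print("Yes" if p > 0.5 else "No", f"(P(Yes) = {p:.3f})")
```

Reading the answer off the next-token distribution, rather than free-form generation, keeps the output space fixed and yields a continuous score suitable for ROC-AUC evaluation.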
The study then examines how hallucinations generated by different LLMs affect performance on molecular property prediction tasks. Experiments use a standardized prompt format to compare predictions based on SMILES strings alone, SMILES with MolT5-generated descriptions, and SMILES with hallucinated descriptions from various LLMs, evaluated by ROC-AUC on five MoleculeNet datasets. Results show that hallucinations generally improve performance over both baselines, with GPT-4o-generated hallucinations yielding the highest gains. Larger models benefit more from hallucinations, though improvements plateau beyond 8 billion parameters, and intermediate temperature settings yield the best performance enhancements.
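Scoring each molecule under the different prompt variants and comparing ROC-AUC is then straightforward. The sketch below reuses `yes_probability` from above, with a two-molecule placeholder (toy labels, not real annotations) standing in for the MoleculeNet tasks.

```python
from sklearn.metrics import roc_auc_score

# Placeholder data: (smiles, hallucinated_description, toy 0/1 label).
# In the paper these come from five MoleculeNet classification datasets.
dataset = [
    ("CC(=O)OC1=CC=CC=C1C(=O)O", "An aromatic ester with analgesic potential.", 1),
    ("CN1C=NC2=C1C(=O)N(C(=O)N2C)C", "A purine alkaloid acting as a stimulant.", 0),
]
instruction = "Can this molecule penetrate the blood-brain barrier?"

labels = [label for _, _, label in dataset]
smiles_only = [yes_probability(s, "", instruction) for s, _, _ in dataset]
with_hallucination = [yes_probability(s, d, instruction) for s, d, _ in dataset]

print(f"SMILES only     ROC-AUC: {roc_auc_score(labels, smiles_only):.4f}")
print(f"+ hallucination ROC-AUC: {roc_auc_score(labels, with_hallucination):.4f}")
```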
In conclusion, the study explores the potential benefits of hallucinations in LLMs for drug discovery tasks. Starting from the hypothesis that hallucinations can enhance performance, the research evaluates seven LLMs across five datasets, with hallucinated molecule descriptions integrated into the prompts. The results confirm that hallucinations improve LLM performance over baseline prompts without them: notably, Llama-3.1-8B achieved an 18.35% ROC-AUC gain, and GPT-4o-generated hallucinations provided consistent improvements across models. Larger models generally benefit more from hallucinations, while factors like generation temperature play a secondary role. The study highlights the creative potential of hallucinations in AI and encourages further exploration of drug discovery applications.
Check out the Paper. All credit for this research goes to the researchers of this project.