Poisoning scientific knowledge using large language models

bioRxiv (Cold Spring Harbor Laboratory), 2023

Abstract
Biomedical knowledge graphs constructed from the scientific literature have been widely used to validate biological discoveries and generate new hypotheses. Recently, large language models (LLMs) have demonstrated a strong ability to generate human-like text. While most of this text has been useful, LLMs might also be used to generate malicious content. Here, we investigate whether a malicious actor could use an LLM to generate a malicious paper that poisons scientific knowledge graphs and, in turn, affects downstream biological applications. As a proof of concept, we develop Scorpius, a conditional text generation model that generates a malicious paper abstract conditioned on a drug to promote and a target disease. The goal is to fool a knowledge graph constructed from a mixture of this malicious abstract and millions of real papers, so that knowledge graph consumers misidentify the promoted drug as relevant to the target disease. We evaluated Scorpius on a knowledge graph constructed from 3,818,528 papers and found that it can raise the relevance of 71.3% of drug-disease pairs from the top 1,000 to the top 10 by adding only a single malicious abstract. Moreover, abstracts generated by Scorpius achieve better (lower) perplexity than those generated by ChatGPT, suggesting that such malicious abstracts cannot be easily detected by humans. Collectively, Scorpius demonstrates the possibility of poisoning scientific knowledge graphs and manipulating downstream applications with LLMs, underscoring the importance of accountable and trustworthy scientific knowledge discovery in the era of LLMs.

Competing Interest Statement: The authors have declared no competing interest.
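The perplexity comparison above can be made concrete with a short sketch. Below is a minimal example of scoring the fluency of a candidate abstract by its perplexity under a proxy language model; the abstract does not specify which model was used for scoring, so GPT-2 via Hugging Face transformers is assumed here, and the sample sentences are hypothetical.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Load a small proxy language model for fluency scoring (an assumption;
# the paper's actual scoring model is not named in this abstract).
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under the proxy model (lower = more fluent)."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing labels=input_ids makes the model return the mean
        # token-level cross-entropy loss; exp(loss) is the perplexity.
        out = model(**enc, labels=enc["input_ids"])
    return torch.exp(out.loss).item()

# Hypothetical usage: a fluent sentence scores far lower than a scrambled one.
print(perplexity("Aspirin reduced tumor growth in a murine model of glioma."))
print(perplexity("Glioma aspirin tumor model reduced growth murine in drug."))
```

Lower perplexity means the proxy model finds the text more predictable, which is the sense in which fluent machine-generated abstracts are hard to flag by reading alone.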
Keywords
medical knowledge, large language models, language models