Taiyi: a bilingual fine-tuned large language model for diverse biomedical tasks

Ling Luo, Jinzhong Ning, Yingwen Zhao, Zhijun Wang, Zeyuan Ding, Peng Chen, Weiru Fu, Qinyu Han, Guangtao Xu, Yunzhi Qiu, Dinghao Pan, Jiru Li, Hao Li, Wenduo Feng, Senbo Tu, Yuqi Liu, Zhihao Yang, Jian Wang, Yuanyuan Sun, Hongfei Lin

JOURNAL OF THE AMERICAN MEDICAL INFORMATICS ASSOCIATION (2024)

Abstract
Objective: Most existing fine-tuned biomedical large language models (LLMs) focus on enhancing performance in monolingual biomedical question answering and conversation tasks. To investigate the effectiveness of fine-tuned LLMs on diverse biomedical natural language processing (NLP) tasks in different languages, we present Taiyi, a bilingual fine-tuned LLM for diverse biomedical NLP tasks.

Materials and Methods: We first curated a comprehensive collection of 140 existing biomedical text mining datasets (102 English and 38 Chinese) across more than 10 task types. These corpora were then converted into instruction data used to fine-tune the general LLM. During the supervised fine-tuning phase, a 2-stage strategy was proposed to optimize model performance across the various tasks.

Results: Experimental results on 13 test sets, covering named entity recognition, relation extraction, text classification, and question answering, demonstrate that Taiyi achieves superior performance compared with general LLMs. A case study involving additional biomedical NLP tasks further shows Taiyi's considerable potential for bilingual biomedical multitasking.

Conclusion: Leveraging rich, high-quality biomedical corpora and developing effective fine-tuning strategies can significantly improve the performance of LLMs within the biomedical domain. Taiyi demonstrates bilingual multitasking capability through supervised fine-tuning. However, tasks such as information extraction, which are not inherently generative, remain challenging for LLM-based generative approaches and still underperform conventional discriminative approaches that use smaller language models.
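The abstract describes converting existing biomedical text mining corpora into instruction data for supervised fine-tuning. The snippet below is a minimal illustrative sketch of what such a conversion might look like for a named entity recognition record; the record schema, prompt wording, and function name are assumptions for illustration, not Taiyi's actual pipeline.

```python
# Hypothetical sketch: reformat a NER record into an instruction-tuning example.
# The record schema and prompt wording are illustrative assumptions,
# not the format used by the Taiyi authors.

def ner_record_to_instruction(record: dict) -> dict:
    """Turn a {"text": ..., "entities": [{"span": ..., "type": ...}]} record
    into an instruction/input/output triple for supervised fine-tuning."""
    instruction = (
        "Extract all biomedical entities from the following text and "
        "list each entity with its type."
    )
    response = "\n".join(
        f"{ent['span']} | {ent['type']}" for ent in record["entities"]
    )
    return {
        "instruction": instruction,
        "input": record["text"],
        "output": response if response else "No entities found.",
    }


if __name__ == "__main__":
    example = {
        "text": "Aspirin reduces the risk of myocardial infarction.",
        "entities": [
            {"span": "Aspirin", "type": "Chemical"},
            {"span": "myocardial infarction", "type": "Disease"},
        ],
    }
    print(ner_record_to_instruction(example))
```

In such a setup, each task type (NER, relation extraction, classification, QA) would get its own conversion template, and the resulting instruction/response pairs would be pooled for supervised fine-tuning.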
Keywords
natural language processing, large language model, supervised fine-tuning, biomedical multitasking