KBioXLM: A Knowledge-anchored Biomedical Multilingual Pretrained Language Model.
CoRR (2023)
Abstract
Most biomedical pretrained language models are monolingual and cannot handle
the growing cross-lingual requirements. The scarcity of non-English domain
corpora, not to mention parallel data, poses a significant hurdle in training
multilingual biomedical models. Since knowledge forms the core of
domain-specific corpora and can be translated into various languages
accurately, we propose a model called KBioXLM, which transforms the
multilingual pretrained model XLM-R into the biomedical domain using a
knowledge-anchored approach. We construct a biomedical multilingual corpus by
incorporating knowledge alignments at three granularities (entity, fact, and
passage levels) into monolingual corpora. We then design three corresponding
training tasks (entity masking, relation masking, and passage relation
prediction) and continue training on top of XLM-R to enhance its cross-lingual
ability in the biomedical domain. To validate the effectiveness of our model,
we translate the English
benchmarks of multiple tasks into Chinese. Experimental results demonstrate
that our model significantly outperforms monolingual and multilingual
pretrained models in cross-lingual zero-shot and few-shot scenarios, achieving
improvements of more than 10 points. Our code is publicly available at
https://github.com/ngwlh-gl/KBioXLM.
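
As a rough illustration of the entity-masking task described in the abstract, the sketch below masks the subwords of aligned biomedical entities and computes a standard masked-language-modeling loss on top of XLM-R. It assumes the HuggingFace transformers API with the xlm-roberta-base checkpoint; the entity_masked_batch helper and the character-level entity spans (e.g. from an entity linker) are hypothetical, and this is not the authors' released code.

import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForMaskedLM.from_pretrained("xlm-roberta-base")

def entity_masked_batch(text, entity_spans):
    # entity_spans: (start, end) character offsets of biomedical entities,
    # e.g. produced by an entity linker (hypothetical input here).
    enc = tokenizer(text, return_offsets_mapping=True, return_tensors="pt")
    input_ids = enc["input_ids"].clone()
    labels = torch.full_like(input_ids, -100)  # -100 = ignored by the MLM loss
    for i, (tok_start, tok_end) in enumerate(enc["offset_mapping"][0].tolist()):
        if tok_start == tok_end:  # special tokens carry empty offsets
            continue
        # Mask every subword that overlaps an entity span.
        if any(tok_start < e and s < tok_end for s, e in entity_spans):
            labels[0, i] = input_ids[0, i]
            input_ids[0, i] = tokenizer.mask_token_id
    return input_ids, enc["attention_mask"], labels

# Example: mask the entity "Aspirin" (characters 0-7) and take one step.
ids, attn, labels = entity_masked_batch(
    "Aspirin inhibits platelet aggregation.", [(0, 7)]
)
loss = model(input_ids=ids, attention_mask=attn, labels=labels).loss
loss.backward()  # one continued-pretraining step; optimizer omitted

The paper's relation-masking and passage-relation-prediction tasks would follow the same pattern, replacing entity spans with fact triples or paired cross-lingual passages.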