Enhancing Multilingual Capabilities of Large Language Models through Self-Distillation from Resource-Rich Languages
CoRR (2024)
Abstract
While large language models (LLMs) have been pre-trained on multilingual
corpora, their performance still lags behind in most languages compared to a
few resource-rich languages. One common approach to mitigate this issue is to
translate training data from resource-rich languages into other languages and
then continue training. However, relying solely on translated data while
ignoring the original capabilities of LLMs across languages is not always
effective; we show that this limits the performance of cross-lingual knowledge
transfer. In this work, we propose SDRRL, a method based on Self-Distillation
from Resource-Rich Languages that effectively improves multilingual performance
by leveraging the internal capabilities of LLMs on resource-rich languages. We
evaluate SDRRL on different LLMs (LLaMA-2 and SeaLLM) and source languages
across various comprehension and generation tasks. Experimental results
demonstrate that SDRRL can significantly enhance multilingual capabilities
while minimizing the impact on original performance in resource-rich languages.
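The abstract only sketches the self-distillation idea at a high level. The snippet below is a minimal, illustrative reading of that idea, not the authors' released pipeline: the model first answers an instruction in the resource-rich language it is strong in, and the translated instruction is then paired with that self-generated response as fine-tuning data, instead of translating both sides. The base model name, the `translate` helper, and the exact pairing scheme are assumptions made for illustration.

```python
# Minimal sketch of self-distillation from a resource-rich language, assuming
# a HuggingFace causal LM and an external translation helper. Illustrative
# only; the actual SDRRL data construction may differ.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "meta-llama/Llama-2-7b-hf"  # assumed base model
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)


def translate(text: str, src: str, tgt: str) -> str:
    """Hypothetical machine-translation helper (e.g. an external MT system)."""
    raise NotImplementedError


def self_distill_pairs(english_instructions, target_lang="th"):
    """Build training pairs that keep the model's own resource-rich responses.

    A pure translation baseline would translate both instruction and response.
    Here the model answers in the resource-rich language first, so its internal
    capability is preserved, and only the instruction is translated.
    """
    pairs = []
    for instruction in english_instructions:
        # 1. Let the LLM answer in the resource-rich language (English).
        inputs = tokenizer(instruction, return_tensors="pt")
        output_ids = model.generate(**inputs, max_new_tokens=256)
        response_en = tokenizer.decode(
            output_ids[0][inputs["input_ids"].shape[1]:],
            skip_special_tokens=True,
        )
        # 2. Translate only the instruction into the low-resource language and
        #    pair it with the model's own response for later fine-tuning.
        instruction_tgt = translate(instruction, src="en", tgt=target_lang)
        pairs.append({"instruction": instruction_tgt, "response": response_en})
    return pairs
```

The resulting pairs would then be mixed with ordinary resource-rich data for continued fine-tuning, which is what the abstract means by "minimizing the impact on original performance in resource-rich languages."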