Continual Knowledge Distillation for Neural Machine Translation

ACL 2023

Abstract
While many parallel corpora are not publicly accessible due to data copyright, data privacy, and competitive differentiation concerns, trained translation models are increasingly available on open platforms. In this work, we propose a method called continual knowledge distillation to take advantage of existing translation models to improve one model of interest. The basic idea is to sequentially transfer knowledge from each trained model to the distilled model. Extensive experiments on Chinese-English and German-English datasets show that our method achieves significant and consistent improvements over strong baselines under both homogeneous and heterogeneous trained model settings and is robust to malicious models.
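The abstract only sketches the core idea (sequentially distilling each available trained model into the model of interest). The snippet below is a minimal, generic PyTorch-style sketch of such sequential knowledge distillation, not the paper's exact algorithm: the `student`/`teacher` call interfaces, the interpolation weight `alpha`, the temperature, and the per-teacher training loop are all illustrative assumptions.

```python
# Illustrative sketch of sequential (continual) knowledge distillation.
# NOT the paper's exact method; interfaces and hyperparameters are assumed.
import torch
import torch.nn.functional as F


def distillation_loss(student_logits, teacher_logits, target_ids,
                      alpha=0.5, temperature=1.0):
    """Interpolate the usual translation loss with a KL term that pulls
    the student's token distribution toward the current teacher's."""
    # Standard cross-entropy against the reference translation.
    ce = F.cross_entropy(
        student_logits.view(-1, student_logits.size(-1)),
        target_ids.view(-1),
    )
    # KL divergence between teacher and student token distributions.
    t = temperature
    kd = F.kl_div(
        F.log_softmax(student_logits / t, dim=-1),
        F.softmax(teacher_logits / t, dim=-1),
        reduction="batchmean",
    ) * (t * t)
    return (1 - alpha) * ce + alpha * kd


def continual_distillation(student, teachers, data_loader, optimizer,
                           epochs_per_teacher=1):
    """Sequentially distill each trained model (teacher) into the student."""
    for teacher in teachers:                 # teachers arrive one at a time
        teacher.eval()
        for _ in range(epochs_per_teacher):
            for src_ids, tgt_ids in data_loader:
                with torch.no_grad():
                    teacher_logits = teacher(src_ids, tgt_ids)
                student_logits = student(src_ids, tgt_ids)
                loss = distillation_loss(student_logits, teacher_logits, tgt_ids)
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
    return student
```

Under these assumptions, `student` and each `teacher` are translation models that return per-token logits over a shared vocabulary; how the paper handles heterogeneous teachers or filters malicious ones is not specified in the abstract and is therefore not modeled here.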