When the Student Becomes the Master: Learning Better and Smaller Monolingual Models from mBERT.

International Conference on Computational Linguistics (2022)

Abstract
In this research, we present pilot experiments on distilling monolingual models from a model jointly trained on 102 languages (mBERT). We demonstrate that, even with a basic distillation setup, the distilled monolingual model can outperform the original model on the target language. We evaluate our methodology on 6 languages with varying amounts of resources and from different language families.
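The abstract only refers to "a basic distillation setup" and does not specify the training objective, so the sketch below is a generic, hedged illustration of standard response-based knowledge distillation (soft teacher targets blended with hard labels), not the authors' actual code. The temperature, loss weight, vocabulary size, and tensor shapes are illustrative assumptions.

```python
# Generic knowledge-distillation sketch in PyTorch. NOT the authors' setup;
# temperature, alpha, and the toy vocabulary size are assumptions.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Blend soft-target KL loss (teacher -> student) with hard-label CE."""
    # Soft targets: align the student's distribution with the teacher's,
    # both softened by the temperature.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)
    # Hard targets: ordinary cross-entropy against the gold token ids.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Toy usage with random logits over a hypothetical 30k-token vocabulary;
# teacher_logits stands in for the frozen mBERT teacher's outputs.
vocab_size, batch = 30_000, 8
teacher_logits = torch.randn(batch, vocab_size)
student_logits = torch.randn(batch, vocab_size, requires_grad=True)
labels = torch.randint(0, vocab_size, (batch,))
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
print(float(loss))
```

In a monolingual-distillation setting along the lines the abstract describes, the teacher logits would come from mBERT and the student would be a smaller model trained only on target-language text; those specifics are assumptions here, not details taken from the paper.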