MixKD: Towards Efficient Distillation of Large-scale Language Models

ICLR (2021)

Citations: 54 | Views: 328
Abstract
Large-scale language models have demonstrated impressive empirical performance in recent years. Nevertheless, the improved results are attained at the price of bigger size, more power consumption, and slower inference, which hinder their applicability to low-resource (memory and computation) platforms. Knowledge distillation (KD) has been demonstrated as an effective framework for compressing such big models. However, large-scale neural network systems are prone to memorizing training instances, and thus tend to make inconsistent predictions when the data distribution is slightly altered. Moreover, the student model has few opportunities to request useful information from the teacher model when there is limited task-specific data available. To address these issues, we propose MixKD, a data-agnostic distillation framework that leverages Mixup, a simple yet efficient data augmentation approach, to endow the resulting model with stronger generalization ability. Concretely, in addition to the original training examples, the student model is encouraged to mimic the teacher's behaviour on the linear interpolations of example pairs as well. We prove, from a theoretical perspective, that MixKD gives rise to a smaller gap between the generalization error and the empirical error. To verify its effectiveness, we conduct extensive experiments on the GLUE benchmark, where MixKD consistently leads to significant gains over the standard KD training, and outperforms several competitive baselines. Experiments under a limited-data setting and ablation studies further demonstrate the advantages of the proposed approach.
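As a rough illustration of the idea sketched in the abstract, the following is a minimal PyTorch sketch of a mixup-based distillation term: two training examples are linearly interpolated and the student is trained to match the teacher's output distribution on the mixed input. The details here are assumptions, not the paper's implementation: the interpolation is applied at the input-embedding level (since discrete tokens cannot be mixed directly), the teacher and student are HuggingFace-style sequence classifiers exposing `inputs_embeds` and `.logits`, and the function name `mixup_kd_loss` and the hyperparameters `alpha` and `T` are hypothetical.

```python
import torch
import torch.nn.functional as F

def mixup_kd_loss(student, teacher, emb_a, emb_b, mask_a, mask_b,
                  alpha=0.4, T=1.0):
    """Sketch of a Mixup-based knowledge-distillation loss (assumptions noted above)."""
    # Sample the interpolation coefficient from a Beta distribution, as in standard mixup.
    lam = torch.distributions.Beta(alpha, alpha).sample().item()

    # Linearly interpolate the two batches of input embeddings; merge attention masks.
    mixed_emb = lam * emb_a + (1.0 - lam) * emb_b
    mixed_mask = torch.maximum(mask_a, mask_b)

    # Teacher predictions on the mixed inputs serve as soft targets (no gradient).
    with torch.no_grad():
        t_logits = teacher(inputs_embeds=mixed_emb, attention_mask=mixed_mask).logits
    s_logits = student(inputs_embeds=mixed_emb, attention_mask=mixed_mask).logits

    # KL divergence between temperature-softened student and teacher distributions.
    kd = F.kl_div(
        F.log_softmax(s_logits / T, dim=-1),
        F.softmax(t_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    return kd
```

Per the abstract, such a term would be used in addition to the losses on the original (un-mixed) training examples, so the student mimics the teacher both on real data and on the interpolated pairs.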
Keywords
efficient distillation, models, MixKD, language, large-scale