Exploring the Limits of Simple Learners in Knowledge Distillation for Document Classification with DocBERT

5th Workshop on Representation Learning for NLP (RepL4NLP-2020), 2020

Abstract
Fine-tuned variants of BERT are able to achieve state-of-the-art accuracy on many natural language processing tasks, although at significant computational cost. In this paper, we verify BERT's effectiveness for document classification and investigate the extent to which BERT-level effectiveness can be obtained by different baselines, combined with knowledge distillation, a popular model compression method. The results show that BERT-level effectiveness can be achieved by a single-layer LSTM with at least 40x fewer FLOPs and only ~3% of the parameters. More importantly, this study analyzes the limits of knowledge distillation as we distill BERT's knowledge all the way down to linear models, a relevant baseline for the task. We report substantial improvements in effectiveness for even the simplest models, as they capture the knowledge learnt by BERT.
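For context, the distillation objective referenced in the abstract typically trains the small student (e.g. a single-layer LSTM or linear model) to match the fine-tuned BERT teacher's outputs in addition to the gold labels. The sketch below is a minimal illustration, not the paper's exact recipe: it assumes an MSE-on-logits soft target combined with hard-label cross-entropy, and all names (distillation_loss, alpha, the random tensors) are illustrative placeholders.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, alpha=0.5):
    """Weighted sum of hard-label cross-entropy and soft-target MSE.

    student_logits: outputs of the small student model (e.g. LSTM / linear).
    teacher_logits: outputs of the frozen, fine-tuned BERT teacher.
    labels: gold class indices; alpha balances the two terms.
    """
    hard = F.cross_entropy(student_logits, labels)        # supervision from gold labels
    soft = F.mse_loss(student_logits, teacher_logits)     # supervision from the teacher
    return alpha * hard + (1.0 - alpha) * soft

# Toy usage with random tensors standing in for a batch of documents.
num_classes = 4
student_logits = torch.randn(8, num_classes, requires_grad=True)
teacher_logits = torch.randn(8, num_classes)              # precomputed teacher outputs
labels = torch.randint(0, num_classes, (8,))

loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
print(loss.item())
```

In practice the teacher's logits are precomputed once over the (possibly augmented) training set, so distillation adds little cost to student training itself.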