Online Knowledge Distillation via Collaborative Learning

CVPR (2020)

Cited by 279 | Views 503
Abstract
This work presents an efficient yet effective online Knowledge Distillation method via Collaborative Learning, termed KDCL, which consistently improves the generalization ability of deep neural networks (DNNs) with different learning capacities. Unlike existing two-stage knowledge distillation approaches that pre-train a large-capacity DNN as the "teacher" and then transfer the teacher's knowledge to a "student" DNN unidirectionally (i.e., one-way), KDCL treats all DNNs as "students" and collaboratively trains them in a single stage (knowledge is transferred among arbitrary students during collaborative training), enabling parallel computing, fast training, and appealing generalization ability. Specifically, we carefully design multiple methods to generate soft targets as supervision by effectively ensembling the students' predictions and distorting the input images. Extensive experiments show that KDCL consistently improves all the "students" on different datasets, including CIFAR-100 and ImageNet. For example, when trained together with KDCL, ResNet-50 and MobileNetV2 achieve 78.2% and 74.0% top-1 accuracy on ImageNet, outperforming their original results by 1.4% and 2.0% respectively. We also verify that models pre-trained with KDCL transfer well to object detection and semantic segmentation on the MS COCO dataset. For instance, the FPN detector is improved by 0.9% mAP.
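The abstract describes the single-stage collaborative scheme only at a high level. Below is a minimal sketch of what one such training step could look like, assuming the simplest soft-target strategy (averaging the peers' temperature-softened predictions) and a standard CE + KL objective per peer; the helper `kdcl_style_step` and the hyperparameters `T` and `alpha` are illustrative assumptions, not the paper's exact recipe, since the paper proposes several distinct ensembling methods together with input distortion.

```python
# Minimal sketch of one collaborative (single-stage, online) distillation step.
# Assumption: the ensemble soft target is a plain average of the peers'
# temperature-softened predictions; the paper's actual ensembling strategies differ.
import torch
import torch.nn as nn
import torch.nn.functional as F


def kdcl_style_step(students, optimizers, images, labels, T=3.0, alpha=0.5):
    """Run one collaborative training step over a list of peer models."""
    logits = [model(images) for model in students]  # forward every peer once

    # Build the shared soft target from all peers (detached, so it acts as a
    # fixed "teacher" signal for this step).
    with torch.no_grad():
        soft_target = torch.stack(
            [F.softmax(lg / T, dim=1) for lg in logits]
        ).mean(dim=0)

    for lg, opt in zip(logits, optimizers):
        ce = F.cross_entropy(lg, labels)                     # hard-label loss
        kd = F.kl_div(F.log_softmax(lg / T, dim=1),          # soft-target loss
                      soft_target, reduction="batchmean") * (T * T)
        loss = (1.0 - alpha) * ce + alpha * kd
        opt.zero_grad()
        loss.backward()
        opt.step()


if __name__ == "__main__":
    # Toy peers standing in for, e.g., ResNet-50 and MobileNetV2.
    peers = [nn.Linear(32, 10),
             nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))]
    opts = [torch.optim.SGD(p.parameters(), lr=0.1) for p in peers]
    x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))
    kdcl_style_step(peers, opts, x, y)
```

Because the soft target is rebuilt from all peers at every step, each network both contributes to and learns from the ensemble, which is what makes the procedure one-stage and symmetric rather than teacher-to-student.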
Keywords
collaborative learning,online knowledge distillation,deep neural networks,DNN,two-stage knowledge distillation,collaborative training,parallel computing,CIFAR-100,ImageNet,KDCL,object detection,semantic segmentation