Continuous Pruning of Deep Convolutional Networks Using Selective Weight Decay.

CoRR (2020)

Abstract
During the last decade, deep convolutional networks have become the reference for many machine learning tasks, especially in computer vision. However, large computational needs make them hard to deploy on resource-constrained hardware. Pruning has emerged as a standard way to compress such large networks. Yet, the severe perturbation caused by most pruning approaches is thought to hinder their efficacy. Drawing inspiration from Lagrangian Smoothing, we introduce a new technique, Selective Weight Decay (SWD), which achieves continuous pruning throughout training. Our approach deviates significantly from most methods in the literature, as it relies on a principle that can be applied in many different ways, for any problem, network, or pruning structure. We show that SWD compares favorably to other approaches in terms of performance/parameters ratio on the CIFAR-10 and ImageNet ILSVRC2012 datasets. On CIFAR-10 with unstructured pruning, for a parameters target of 0.1%, SWD attains a Top-1 accuracy of 81.32% while the reference method only reaches 27.78%. On CIFAR-10 with structured pruning, for a parameters target of 2.5%, the reference technique drops to 10% (random guess) while SWD maintains a Top-1 accuracy of 93.22%. On the ImageNet ILSVRC2012 dataset with unstructured pruning, for a parameters target of 2.5%, SWD attains 84.6% Top-5 accuracy instead of the 77.07% reached by the reference.
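To make the idea of continuous pruning via a selective penalty concrete, below is a minimal PyTorch-style sketch. The helper name swd_penalty, the global magnitude-based selection of pruning targets, and the linear ramp of the penalty coefficient are illustrative assumptions, not the authors' exact formulation: the point is only that an extra decay term is applied solely to the weights the pruning criterion would remove, so they shrink toward zero continuously during training instead of being cut abruptly.

import torch
import torch.nn as nn

def swd_penalty(model: nn.Module, keep_ratio: float, a_max: float, progress: float) -> torch.Tensor:
    # Collect the prunable weights (here: every tensor with more than one
    # dimension, i.e. convolutional and linear kernels) and compute a single
    # global magnitude threshold that keeps only the top keep_ratio fraction.
    weights = [p for p in model.parameters() if p.dim() > 1]
    flat = torch.cat([p.detach().abs().flatten() for p in weights])
    k = max(1, int(keep_ratio * flat.numel()))
    threshold = torch.topk(flat, k, largest=True).values.min()

    # Extra L2 penalty applied only to weights below the threshold (the
    # current pruning targets). The coefficient ramps up with training
    # progress, so targeted weights decay continuously toward zero.
    coeff = a_max * progress  # assumed linear schedule, for illustration only
    penalty = sum(((p.abs() < threshold).float() * p.pow(2)).sum() for p in weights)
    return coeff * penalty

# Usage inside a standard training step (progress in [0, 1]):
#   loss = criterion(model(x), y) + swd_penalty(model, keep_ratio=0.001,
#                                               a_max=1.0, progress=step / total_steps)

At the end of training, weights still below the threshold can be set to zero (or the corresponding structures removed) with little additional accuracy loss, since the selective decay has already driven them close to zero.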
Keywords
deep convolutional networks,selective weight decay,pruning