Effective Model Compression via Stage-wise Pruning

MACHINE INTELLIGENCE RESEARCH (2023)

Abstract
Automated machine learning (AutoML) pruning methods aim to automatically search for a pruning strategy that reduces the computational complexity of deep convolutional neural networks (deep CNNs). However, previous work has found that the results of many AutoML pruning methods cannot even surpass those of uniform pruning. In this paper, the ineffectiveness of AutoML pruning is shown to be caused by insufficient and unfair training of the supernet. A deep supernet suffers from insufficient training because it contains too many candidate subnets. To overcome this, a stage-wise pruning (SWP) method is proposed, which splits a deep supernet into several stage-wise supernets to reduce the number of candidates and uses inplace distillation to supervise the training of each stage. In addition, a wide supernet suffers from unfair training because the sampling probability of each channel is unequal. Therefore, the fullnet and the tinynet are sampled in every training iteration to ensure that each channel is sufficiently trained. Remarkably, the proxy performance of subnets trained with SWP is closer to their actual performance than in most previous AutoML pruning work. Furthermore, experiments show that SWP achieves state-of-the-art results on both CIFAR-10 and ImageNet under the mobile setting.
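The sampling scheme described above can be illustrated with a minimal toy sketch. This is not the paper's implementation: it assumes a single slimmable-style linear layer where a subnet keeps the first k output channels, uses a fixed "tinynet" width, and stands in for inplace distillation with a simple mean-squared error between the tinynet's output and the corresponding slice of the fullnet's output.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "supernet": one linear layer with 16 candidate output channels.
# A subnet keeps only the first `keep` channels (slimmable-style sharing).
# All names and widths here are illustrative assumptions.
W = rng.normal(size=(16, 8))  # 16 output channels, 8 input features

def forward(x, keep):
    """Forward pass through the subnet using the first `keep` channels."""
    return x @ W[:keep].T

def train_step(x):
    """One SWP-style iteration: sample both the fullnet and the tinynet
    so the widest and narrowest configurations are visited every step,
    and use the fullnet's output as the tinynet's soft target
    (a stand-in for inplace distillation)."""
    full_out = forward(x, keep=16)  # fullnet: all channels
    tiny_out = forward(x, keep=4)   # tinynet: narrowest candidate
    # Distillation-style loss: tinynet mimics the matching fullnet slice.
    distill_loss = np.mean((tiny_out - full_out[:, :4]) ** 2)
    return full_out, tiny_out, distill_loss

x = rng.normal(size=(2, 8))
full_out, tiny_out, loss = train_step(x)
```

Because both extreme widths appear in every iteration, the shared early channels (used by every subnet) and the rarely-sampled late channels (used only by wide subnets) each receive gradient signal, which is the intuition behind the fairness fix sketched in the abstract.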
Keywords
Automated machine learning (AutoML), channel pruning, model compression, distillation, convolutional neural networks (CNN)