Structured Pruning for Efficient Convolutional Neural Networks via Incremental Regularization

IEEE Journal of Selected Topics in Signal Processing (2020)

Abstract
Modern Convolutional Neural Networks (CNNs) are usually restricted by their massive computation and high storage. Parameter pruning is a promising approach to CNN compression and acceleration that eliminates redundant model parameters with tolerable performance degradation. Despite their effectiveness, existing regularization-based parameter pruning methods usually drive weights towards zero with large, constant regularization factors, which neglects the fragility of the expressiveness of CNNs and calls for a gentler regularization scheme so that the network can adapt during pruning. To achieve this, we propose a novel regularization-based pruning method, named IncReg, which incrementally assigns different regularization factors to different weights based on their relative importance. Empirical analysis on the CIFAR-10 dataset verifies the merits of IncReg. Further extensive experiments with popular CNNs on the CIFAR-10 and ImageNet datasets show that IncReg achieves comparable or even better results than state-of-the-art methods. Moreover, to resolve the problem that column pruning cannot be directly applied to off-the-shelf deep learning libraries for acceleration, we generalize IncReg from column pruning to spatial pruning, which can equip existing structured pruning methods (such as channel pruning) for further acceleration with negligible accuracy loss. Our source code and trained models are available at: https://github.com/mingsun-tse/caffe_increg
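For intuition, the following is a minimal PyTorch sketch of the core idea described in the abstract: incremental, importance-based regularization of weight groups. It is not the authors' implementation (which is in Caffe, linked above); the L1-norm importance criterion, the fixed increment delta, the 50% pruning ratio, and the helper names update_reg_factors and reg_gradient are all illustrative assumptions rather than the paper's exact scheme.

import torch
import torch.nn as nn

def update_reg_factors(conv: nn.Conv2d, reg: torch.Tensor,
                       delta: float = 1e-4, prune_ratio: float = 0.5):
    # Incrementally raise the regularization factor of the filters
    # currently ranked as least important, so the pressure towards
    # zero grows gently instead of being applied all at once.
    with torch.no_grad():
        # Importance proxy: L1 norm of each output filter (an assumption;
        # the paper ranks weight groups by relative importance).
        importance = conv.weight.abs().sum(dim=(1, 2, 3))
        n_prune = int(prune_ratio * importance.numel())
        least_important = torch.argsort(importance)[:n_prune]
        reg[least_important] += delta
    return reg

def reg_gradient(conv: nn.Conv2d, reg: torch.Tensor) -> torch.Tensor:
    # Per-filter L2 penalty gradient, scaled by each filter's own factor.
    return reg.view(-1, 1, 1, 1) * conv.weight.detach()

# Sketch of use inside a training loop, after loss.backward():
#   with torch.no_grad():
#       conv.weight.grad += reg_gradient(conv, reg)
#   reg = update_reg_factors(conv, reg)  # reg starts as zeros(out_channels)
# Once a filter's factor exceeds a preset ceiling, its weights have been
# driven to (near) zero and the filter can be removed from the network.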
Keywords
Convolutional neural network, Model compression, Structured pruning, Incremental regularization