Structured Pruning for Efficient Convolutional Neural Networks via Incremental Regularization

IEEE Journal of Selected Topics in Signal Processing (2020)

Abstract
Modern Convolutional Neural Networks (CNNs) are typically constrained by their heavy computation and large storage requirements. Parameter pruning is a promising approach for CNN compression and acceleration: it eliminates redundant model parameters with tolerable performance degradation. Despite their effectiveness, existing regularization-based pruning methods usually drive weights towards zero with large, constant regularization factors, which neglects the fragility of CNN expressiveness; a gentler regularization scheme is therefore needed so that the network can adapt during pruning. To this end, we propose a novel regularization-based pruning method, named IncReg, which incrementally assigns different regularization factors to different weights based on their relative importance. Empirical analysis on the CIFAR-10 dataset verifies the merits of IncReg. Further extensive experiments with popular CNNs on the CIFAR-10 and ImageNet datasets show that IncReg achieves results comparable to, or better than, the state of the art. Moreover, to address the fact that column pruning cannot be directly exploited by off-the-shelf deep learning libraries for acceleration, we generalize IncReg from column pruning to spatial pruning, which can equip existing structured pruning methods (such as channel pruning) for further acceleration with negligible accuracy loss. Our source code and trained models are available at: https://github.com/mingsun-tse/caffe_increg .
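The abstract's core idea is to grow per-group regularization factors gradually according to relative importance, instead of applying one large constant factor to all weights. The authors' reference implementation is the Caffe code linked above; the PyTorch sketch below is only an illustration of that idea under stated assumptions: the helper name incremental_l2_penalty, the L1-norm importance proxy, the "penalize the bottom half of filters" rule, and the step/cap values are all assumptions for illustration, not details taken from the paper.

```python
# Illustrative sketch of incremental, importance-based regularization for
# structured (filter-level) pruning. NOT the authors' Caffe implementation
# (see the linked repository); the importance proxy, schedule, and names
# below are assumptions made for illustration only.
import torch
import torch.nn as nn

def incremental_l2_penalty(conv: nn.Conv2d, reg_factors: torch.Tensor,
                           step: float, max_reg: float) -> torch.Tensor:
    """Grow per-filter regularization factors and return the weighted L2 penalty.

    reg_factors holds one factor per output filter; factors of less important
    filters are increased a little on every call ("incremental"), rather than
    applying one large constant factor to every weight.
    """
    with torch.no_grad():
        # Assumed proxy for relative importance: L1 norm of each filter.
        importance = conv.weight.abs().sum(dim=(1, 2, 3))
        order = importance.argsort()              # least important filters first
        n_penalized = reg_factors.numel() // 2    # assumed: penalize bottom half
        reg_factors[order[:n_penalized]] += step  # grow their factors slightly
        reg_factors.clamp_(max=max_reg)           # cap the regularization strength
    # Weighted L2 penalty: each filter is regularized by its own current factor.
    per_filter_l2 = (conv.weight ** 2).sum(dim=(1, 2, 3))
    return (reg_factors * per_filter_l2).sum()

# Usage inside a training step (sketch):
conv = nn.Conv2d(64, 128, kernel_size=3)
reg = torch.zeros(conv.out_channels)
x = torch.randn(8, 64, 32, 32)
loss = conv(x).pow(2).mean()                      # stand-in for the task loss
loss = loss + incremental_l2_penalty(conv, reg, step=1e-5, max_reg=1e-2)
loss.backward()
```

Calling the helper once per training iteration slowly raises the penalty on the currently least important filters, which is the "gentle" scheme the abstract argues for: the network can adapt while weights are driven towards zero, and filters whose weights have collapsed can then be removed structurally.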
Key words
Convolutional neural network, Model compression, Structured pruning, Incremental regularization