SNPF: Sensitiveness-Based Network Pruning Framework for Efficient Edge Computing

Yiheng Lu, Ziyu Guan, Wei Zhao, Maoguo Gong, Wenjing Wang, Kai Sheng

IEEE Internet Things J. (2024)

Abstract
Convolutional neural networks (CNNs) are used extensively in the Internet of Things (IoT), for example in mobile phones, surveillance, and satellites. However, deploying CNNs is difficult because the structures of hand-designed networks are complicated. Therefore, we propose a sensitiveness-based network pruning framework (SNPF) that reduces the size of the original network to save computation resources. SNPF evaluates the importance of each convolutional layer by how well inference accuracy is reconstructed after extra noise is added to the original model, and then removes filters according to each layer's degree of sensitiveness. Compared with previous weight-norm-based pruning methods such as "l1-norm", "BatchNorm-Pruning", and "Taylor-Pruning", SNPF is robust to parameter updates, which avoids inconsistent filter evaluations when the parameters of the pre-trained model are not fully optimized. Consequently, SNPF can prune the network at an early training stage to save computation resources. We test our method on three prevalent models (VGG-16, ResNet-18, and ResNet-50) and a customized Conv-4 with four convolutional layers, evaluated on CIFAR-10, CIFAR-100, ImageNet, and MNIST, respectively. Impressively, even when VGG-16 is trained for only 50 epochs, we obtain the same layer-importance evaluation as when the model is fully trained. Additionally, we achieve pruning results comparable to those of previous weight-oriented methods on the other three models.
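As a rough illustration of the sensitiveness idea described in the abstract, the sketch below perturbs each convolutional layer's weights with Gaussian noise and records the resulting drop in validation accuracy, treating larger drops as higher sensitiveness. This is not the authors' released code: the model, data loader, noise scale, and helper names are assumptions made only for illustration.

import copy
import torch

@torch.no_grad()
def accuracy(model, loader, device="cpu"):
    """Top-1 accuracy of `model` on `loader` (illustrative helper)."""
    model.eval()
    correct, total = 0, 0
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        preds = model(images).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return correct / max(total, 1)

@torch.no_grad()
def layer_sensitiveness(model, loader, noise_std=0.05, device="cpu"):
    """Score each Conv2d layer by the accuracy drop caused by weight noise.

    noise_std is an assumed noise scale, not a value from the paper.
    """
    base_acc = accuracy(model, loader, device)
    scores = {}
    for name, module in model.named_modules():
        if isinstance(module, torch.nn.Conv2d):
            perturbed = copy.deepcopy(model).to(device)
            # Add Gaussian noise only to this layer's weights in the copy.
            w = dict(perturbed.named_modules())[name].weight
            w.add_(torch.randn_like(w) * noise_std)
            # Larger accuracy drop => more sensitive layer.
            scores[name] = base_acc - accuracy(perturbed, loader, device)
    return scores

In practice, one would presumably average the scores over several noise draws and map each layer's sensitiveness to a per-layer pruning ratio (fewer filters removed from more sensitive layers) before actually pruning.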
Keywords
Convolutional neural network, network pruning, sensitiveness, Internet of Things