Class-Separation Preserving Pruning for Deep Neural Networks

IEEE Transactions on Artificial Intelligence (2024)

Abstract
Neural network pruning is essential for deploying deep neural networks on resource-constrained edge devices, as it greatly reduces the number of network parameters without drastically compromising accuracy. One class of techniques in the literature assigns an importance score to each parameter and prunes those of least importance. However, most of these methods rely on generalized estimates of each parameter's importance, ignoring the context of the specific task at hand. In this article, we propose a task-specific pruning approach, CSPrune, based on how effectively a neuron or convolutional filter separates classes. Our axiomatic approach assigns an importance score based on how separable the different classes are in the output activations or feature maps, preserving the separation of classes and thereby avoiding a reduction in classification accuracy. Additionally, most pruning algorithms prune individual connections or weights, yielding a sparse network, without considering whether the hardware the network is deployed on can actually exploit that sparsity. CSPrune instead prunes whole neurons or filters, producing a more structured pruned network whose sparsity the hardware can utilize more efficiently. We evaluate our pruning method on various benchmark datasets, both small and large, and on several network architectures, and show that our approach outperforms comparable pruning techniques.
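The abstract describes scoring each neuron or filter by how well its activations separate the classes, then pruning whole low-scoring filters. The paper's exact class-separation score (CSS) formula is not given here, so the sketch below is a hypothetical stand-in using a Fisher-style ratio of between-class to within-class activation variance; the function names and the `keep_ratio` parameter are illustrative assumptions, not the authors' API.

```python
import numpy as np

def class_separation_score(acts, labels):
    """Hypothetical Fisher-style separation score for one filter.

    acts:   (N,) mean activation of the filter for each sample
    labels: (N,) integer class labels
    A higher ratio of between-class to within-class scatter means the
    filter's activations separate the classes more cleanly.
    """
    overall_mean = acts.mean()
    between, within = 0.0, 0.0
    for c in np.unique(labels):
        a = acts[labels == c]
        between += len(a) * (a.mean() - overall_mean) ** 2
        within += ((a - a.mean()) ** 2).sum()
    return between / (within + 1e-8)  # epsilon guards division by zero

def prune_filters(activations, labels, keep_ratio=0.5):
    """Structured pruning sketch: keep whole filters, drop the rest.

    activations: (N, F) per-sample mean activations of F filters
    Returns the sorted indices of the filters to keep (highest scores).
    """
    scores = np.array([class_separation_score(activations[:, f], labels)
                       for f in range(activations.shape[1])])
    n_keep = max(1, int(round(keep_ratio * len(scores))))
    return np.sort(np.argsort(scores)[::-1][:n_keep])
```

Because entire filters are removed rather than individual weights, the surviving network stays dense and maps directly onto standard hardware, which is the structured-sparsity advantage the abstract highlights.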
Keywords
Class-separation score (CSS), deep neural networks (DNNs), pruning, structured pruning