Structured Term Pruning for Computational Efficient Neural Networks Inference

IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (2023)

Abstract
State-of-the-art convolutional neural network accelerators show growing interest in exploiting bit-level sparsity and eliminating the ineffectual computations associated with zero bits. However, the excessive redundancy and the irregular distribution of nonzero bits limit the real speedup achievable in these accelerators. To address this, we propose an algorithm-architecture codesign, named structured term pruning (STP), to boost the computational efficiency of neural network inference. Specifically, we enhance bit sparsity by guiding the weights toward values with fewer power-of-two terms, and then structure the terms under layer-wise group budgets. Retraining is adopted to recover the accuracy drop. We also design the hardware of the group processing element and a fast signed-digit encoder for efficient implementation of STP networks. The system design of STP is realized with a few simple modifications to an input-stationary systolic array design. Extensive evaluation results demonstrate that STP significantly reduces inference computation costs, achieving a $2.35\times$ computational energy saving for the ResNet18 network on the ImageNet dataset.
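To make the term-pruning idea concrete, below is a minimal Python sketch of the two steps the abstract describes: decomposing integer (quantized) weights into signed power-of-two terms via canonical signed-digit (CSD) recoding, and discarding the smallest terms across a weight group until a fixed group budget is met. The function names and the greedy keep-largest-terms policy are our own illustration; the paper's actual training-time regularization, retraining loop, and hardware encoder are not reproduced here.

```python
def csd_terms(n: int):
    """Decompose integer n into canonical signed-digit (CSD) terms,
    returned as (sign, exponent) pairs, e.g. 7 -> [(-1, 0), (1, 3)]
    meaning -2^0 + 2^3. CSD uses the fewest signed power-of-two terms."""
    sign, n = (1, n) if n >= 0 else (-1, -n)
    terms, k = [], 0
    while n:
        if n & 1:
            d = 2 - (n & 3)          # digit is +1 if n % 4 == 1, -1 if n % 4 == 3
            terms.append((sign * d, k))
            n -= d                   # remainder is now even
        n >>= 1
        k += 1
    return terms


def prune_group_to_budget(weights, budget):
    """Keep only the `budget` largest-magnitude terms across a group of
    integer weights and rebuild the weights from the surviving terms.
    Illustrative greedy policy; STP additionally retrains the network
    to recover the accuracy lost to the discarded terms."""
    pool = []
    for i, w in enumerate(weights):
        for s, e in csd_terms(int(w)):
            pool.append((e, s, i))   # larger exponent -> larger magnitude 2^e
    pool.sort(key=lambda t: t[0], reverse=True)
    pruned = [0] * len(weights)
    for e, s, i in pool[:budget]:
        pruned[i] += s * (1 << e)
    return pruned


# Example: a small weight group pruned to a budget of 3 terms.
print(prune_group_to_budget([7, 3, -5], budget=3))   # -> [8, 4, -4]
```

Because each surviving term is just a shifted addition or subtraction, a fixed per-group term count maps naturally onto the group processing elements and the systolic array organization described in the abstract.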
Keywords
Algorithm-architecture codesign, compression and acceleration, neural networks, quantization, systolic array (SA)