PermCNN: Energy-Efficient Convolutional Neural Network Hardware Architecture With Permuted Diagonal Structure

Periodicals (2021)

Cited by 20 | Viewed 56
Abstract
In the emerging artificial intelligence (AI) era, efficient hardware accelerator design for deep neural networks (DNNs) is very important to enable real-time, energy-efficient DNN model deployment. To this end, various DNN model compression approaches and the corresponding hardware architectures have been intensively investigated. Recently, PermDNN, a model compression approach that imposes a permuted diagonal structure, was proposed with promising classification performance and hardware performance. However, the existing PermDNN hardware architecture is specifically designed for DNN models containing fully-connected (FC) layers; support for convolutional (CONV) layers is missing. To fill this gap, this article proposes PermCNN, an energy-efficient hardware architecture for permuted diagonal structured convolutional neural networks (CNNs). By fully utilizing the strong structured sparsity in the trained models and carefully leveraging the dynamic activation sparsity, PermCNN delivers very high hardware performance for inference tasks on CNN models. A design example with 28 nm CMOS technology shows that, compared to the state-of-the-art CNN accelerator, PermCNN achieves 3.74× and 3.11× improvements in area and energy efficiency, respectively, on the AlexNet workload, and 17.49× and 14.22× improvements in area and energy efficiency, respectively, on the VGG model. After including energy consumption incurred by DRAM access, PermCNN achieves 2.60× and 9.62× overall energy consumption improvements on the AlexNet and VGG workloads, respectively.
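The structured sparsity the abstract refers to constrains each weight block to a permuted diagonal pattern: in a p × p block, every row and every column holds exactly one nonzero, at position (i, (i + k) mod p) for some permutation offset k. The sketch below is illustrative only (function and parameter names are our own, not from the paper); it shows why such a block can be stored with just p values plus one offset.

```python
import numpy as np

def permuted_diagonal_block(p, k, values=None):
    """Build a p x p permuted diagonal block.

    The single nonzero in row i sits at column (i + k) % p, so the
    whole block is determined by p values and the offset k.
    """
    if values is None:
        values = np.arange(1.0, p + 1.0)  # placeholder weights
    block = np.zeros((p, p))
    for i in range(p):
        block[i, (i + k) % p] = values[i]
    return block

B = permuted_diagonal_block(4, 1)
# Exactly one nonzero per row and per column: p nonzeros out of p*p
# entries, i.e. a fixed compression ratio of p for this block.
print(np.count_nonzero(B))          # 4
print(np.count_nonzero(B, axis=0))  # [1 1 1 1]
```

Because the nonzero positions are fully determined by k, a hardware accelerator like PermCNN needs no per-element index storage, which is what enables the regular, energy-efficient dataflow claimed above.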
Keywords
Deep learning, model compression, hardware accelerator, convolutional neural network