Low power Convolutional Neural Networks on a chip

2016 IEEE International Symposium on Circuits and Systems (ISCAS), 2016

Cited 66 | Views 215
Abstract
Deep learning, and especially the Convolutional Neural Network (CNN), is among the most powerful and widely used techniques in computer vision. Applications range from image classification to object detection, segmentation, Optical Character Recognition (OCR), and more. At the same time, CNNs are both computationally intensive and memory intensive, making them difficult to deploy on low-power, lightweight embedded systems. In this work, we introduce an on-chip convolutional neural network implementation for low-power embedded systems. We point out that the high precision of weights limits low-power CNN implementation on both FPGA and RRAM platforms. A dynamic quantization method is introduced to reduce the precision while maintaining the same or comparable accuracy. Finally, the detailed designs of the low-power FPGA-based CNN and the RRAM-based CNN are provided and compared. The results show that the FPGA-based design achieves 2× the energy efficiency of a GPU implementation, and the RRAM-based design can further obtain more than 40× energy efficiency gains.
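The dynamic quantization mentioned in the abstract can be illustrated with a minimal sketch: for each layer, choose the fixed-point fractional length that best preserves the weights at a reduced bit width, so different layers may use different scaling. The function name, the 8-bit default, and the squared-error selection criterion below are assumptions for illustration, not the paper's exact procedure.

```python
# Hypothetical sketch of per-layer dynamic fixed-point quantization.
# The error criterion and bit width are illustrative assumptions.
import numpy as np

def quantize_layer(weights, total_bits=8):
    """Quantize one layer's weights to `total_bits` fixed-point values,
    picking the fractional length that minimizes reconstruction error."""
    best_frac, best_err, best_q = None, np.inf, None
    qmin = -(2 ** (total_bits - 1))
    qmax = 2 ** (total_bits - 1) - 1
    for frac_bits in range(total_bits):
        scale = 2.0 ** frac_bits
        q = np.clip(np.round(weights * scale), qmin, qmax)
        err = np.sum((q / scale - weights) ** 2)
        if err < best_err:
            best_frac, best_err, best_q = frac_bits, err, q / scale
    return best_q, best_frac

# Usage: each layer gets its own fractional length (hence "dynamic").
w = np.random.randn(64, 3, 3, 3) * 0.1
qw, frac = quantize_layer(w, total_bits=8)
```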
Keywords
convolutional neural networks on a chip, low-power CNN implementation, computer vision, RRAM-based design, GPU implementation, energy efficiency, FPGA-based design, RRAM-based CNN, FPGA-based CNN, field programmable gate arrays, dynamic quantization method, on-chip convolutional neural network implementation, low-power lightweight embedded systems, OCR, optical character recognition, object detection, image classification