QL-Net: Quantized-by-LookUp CNN

2018 15th International Conference on Control, Automation, Robotics and Vision (ICARCV)

Abstract
Convolutional Neural Networks (CNNs) have achieved state-of-the-art performance on different computer vision tasks. However, CNN algorithms are computationally and power intensive, which makes them difficult to run on wearable and embedded systems. One way to address this constraint is to reduce the number of computational operations performed. Recently, several approaches have addressed the problem of computational complexity in CNNs. Most of these methods, however, require dedicated hardware. We propose a new method for computation reduction in CNNs that substitutes Multiply and Accumulate (MAC) operations with a codebook lookup and can be executed on generic hardware. The proposed method, called QL-Net, combines several concepts: (i) codebook construction, (ii) a layer-wise retraining strategy, and (iii) substitution of the MAC operations with a lookup of the convolution responses at inference time. The proposed QL-Net achieves 98.6% accuracy on the MNIST dataset with a 5.8x reduction in runtime, compared to a MAC-based CNN model that achieves 99.2% accuracy.
Keywords
quantized-by-lookup CNN, wearable systems, embedded systems, computational operations, computational complexity, dedicated hardware, computation reduction, codebook lookup, generic hardware, QL-Net combines several concepts, codebook construction, layer-wise retraining strategy, MAC operations, convolution responses, QL-Net achieves, MAC-based CNN model, convolutional neural networks, computer vision tasks, accumulate operations