IMPQ: Reduced Complexity Neural Networks Via Granular Precision Assignment

ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)(2022)

Abstract
The demand for deploying deep neural networks (DNNs) on resource-constrained edge platforms is ever increasing. Today’s DNN accelerators support mixed-precision computation to reduce computational and storage costs, but they require networks with precision assigned at variable granularity, i.e., at the network, layer, or kernel level. Granular precision assignment is challenging because the search space is exponentially large, and efficient methods for such assignment are lacking. To address this problem, we introduce the iterative mixed-precision quantization (IMPQ) framework, which allocates precision at variable granularity. IMPQ employs a sensitivity metric that orders weight/activation groups by the likelihood that their quantization noise causes input samples to be misclassified. Starting from a pretrained full-precision network, it iteratively reduces the precision of the weights and activations, beginning with the least sensitive group. Compared to state-of-the-art methods, IMPQ reduces computational costs by 2× to 2.5× for compact networks such as MobileNet-V1 on ImageNet with no accuracy loss. Our experiments reveal that kernel-wise precision assignment provides 1.7× higher compression than layer-wise assignment.
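The iterative procedure described above (order groups by a sensitivity metric, then lower the precision of the least sensitive group first) can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: the group names, the noise-energy proxy used as the sensitivity metric, and the `noise_budget` stopping criterion are all assumptions made for the example; the paper's actual metric is tied to misclassification likelihood.

```python
import numpy as np

def quantize(w, bits):
    # Uniform symmetric quantization of a weight group to `bits` bits.
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(w)) / qmax if np.any(w) else 1.0
    return np.round(w / scale) * scale

def sensitivity(w, bits):
    # Proxy sensitivity metric (an assumption for this sketch): the
    # quantization-noise energy the group would incur at one bit below
    # its current precision.
    return float(np.sum((w - quantize(w, bits - 1)) ** 2))

def impq_sketch(groups, start_bits=8, min_bits=2, noise_budget=1e-3):
    # Hypothetical sketch of sensitivity-ordered iterative precision
    # reduction: repeatedly drop one bit from the least sensitive group
    # until no further reduction fits within the noise budget.
    bits = {name: start_bits for name in groups}
    total_noise = 0.0
    while True:
        candidates = [
            (sensitivity(groups[n], bits[n]), n)
            for n in groups if bits[n] > min_bits
        ]
        if not candidates:
            break
        cost, name = min(candidates)
        if total_noise + cost > noise_budget:
            break
        bits[name] -= 1
        total_noise += cost
    return bits
```

With kernel-level groups this loop naturally yields a non-uniform bit allocation: low-magnitude (insensitive) kernels are driven to few bits while sensitive ones retain high precision, which is the source of the extra compression reported for kernel-wise over layer-wise assignment.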
Keywords
mixed-precision,DNN,quantization