Integer-only Zero-shot Quantization for Efficient Speech Recognition

Sehoon Kim, Amir Gholami, Zhewei Yao, Nicholas Lee, Patrick Wang, Aniruddha Nrusimha, Bohan Zhai, Tianren Gao, Michael W. Mahoney, Kurt Keutzer

arXiv (2022)

Abstract
End-to-end neural network models achieve improved performance on various automatic speech recognition (ASR) tasks. However, these models perform poorly on edge hardware due to their large memory and computation requirements. While quantizing model weights and/or activations to low precision can be a promising solution, previous research on quantizing ASR models is limited. In particular, previous approaches use floating-point arithmetic during inference and thus cannot fully exploit efficient integer processing units. Moreover, they require training and/or validation data during quantization, which may not be available due to security or privacy concerns. To address these limitations, we propose an integer-only, zero-shot quantization scheme for ASR models. In particular, we generate synthetic data whose runtime statistics resemble the real data, and we use it to calibrate models during quantization. We apply our method to quantize QuartzNet, Jasper, and Conformer and show negligible WER degradation compared to the full-precision baseline models, even without using any data. Moreover, we achieve up to 2.35x speedup on a T4 GPU and a 4x compression rate, with a modest WER degradation of <1% with INT8 quantization.
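The abstract describes static INT8 quantization calibrated on synthetic data rather than real speech. The following NumPy sketch illustrates the general idea only, not the paper's actual method: the symmetric per-tensor scheme, the Gaussian synthetic batches, and all function names are illustrative assumptions.

```python
import numpy as np

def calibrate_scale(batches, num_bits=8):
    """Pick a symmetric per-tensor scale from the observed value range."""
    qmax = 2 ** (num_bits - 1) - 1  # 127 for INT8
    max_abs = max(np.abs(b).max() for b in batches)
    return max_abs / qmax

def quantize(x, scale, num_bits=8):
    """Map float values to int8; downstream inference can stay integer-only."""
    qmax = 2 ** (num_bits - 1) - 1
    return np.clip(np.round(x / scale), -qmax - 1, qmax).astype(np.int8)

def dequantize(q, scale):
    """Recover approximate float values for accuracy checks."""
    return q.astype(np.float32) * scale

# Zero-shot calibration stand-in: use synthetic inputs drawn to mimic
# real activation statistics (here, a unit Gaussian as a placeholder).
rng = np.random.default_rng(0)
synthetic_batches = [
    rng.normal(0.0, 1.0, size=(16, 64)).astype(np.float32) for _ in range(8)
]
scale = calibrate_scale(synthetic_batches)

x = synthetic_batches[0]
q = quantize(x, scale)
err = np.abs(dequantize(q, scale) - x).max()  # bounded by half a quant step
```

In a real pipeline the calibration batches would be passed through the network to collect per-layer activation ranges; this sketch only shows the scale-fitting and rounding step for a single tensor.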
Keywords
Automatic speech recognition,quantization,compression,integer-only,efficient inference