Rethinking the Importance of Quantization Bias, Toward Full Low-Bit Training

IEEE Transactions on Image Processing (2022)

Abstract
Quantization is a promising technique to reduce the computation and storage costs of DNNs. Low-bit ($\leq 8$ bits) precision training remains an open problem due to the difficulty of gradient quantization. In this paper, we find two long-standing misunderstandings of the bias of gradient quantization noise. First, the large bias of gradient quantization noise, rather than its variance, is the key factor in training accuracy loss. Second, the widely used stochastic rounding cannot solve the training crash problem caused by the gradient quantization bias in practice. Moreover, we find that the asymmetric distribution of gradients causes a large bias of gradient quantization noise. Based on our findings, we propose a novel adaptive piecewise quantization method to effectively limit the bias of gradient quantization noise. Accordingly, we propose a new data format, Piecewise Fixed Point (PWF), to represent data after quantization. We apply our method to different applications including image classification, machine translation, optical character recognition, and text classification. We achieve approximately $1.9\times$ to $3.5\times$ speedup compared with full-precision training, with an accuracy loss of less than 0.5%. To the best of our knowledge, this is the first work to quantize gradients of all layers to 8 bits in both large-scale CNN and RNN training with negligible accuracy loss.
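The quantities the abstract refers to, the bias and variance of gradient quantization noise, can be measured directly. The sketch below is not from the paper: it uses plain uniform 8-bit fixed-point quantization on a synthetic, asymmetric "gradient" tensor (the function `quantize_uniform` and the simulated distribution are assumptions for illustration), and simply compares the empirical noise bias and variance under round-to-nearest versus stochastic rounding. It does not reproduce the paper's adaptive piecewise quantization or the PWF format, nor the in-training behavior the paper analyzes.

```python
import numpy as np

def quantize_uniform(x, num_bits=8, stochastic=False, rng=None):
    """Uniform symmetric fixed-point quantization of a tensor to num_bits.

    The scale is taken from the max absolute value (a common baseline,
    not the paper's adaptive piecewise scheme)."""
    qmax = 2 ** (num_bits - 1) - 1          # e.g. 127 for 8 bits
    scale = np.max(np.abs(x)) / qmax
    y = x / scale
    if stochastic:
        # Stochastic rounding: round up with probability equal to the fractional part.
        if rng is None:
            rng = np.random.default_rng(0)
        floor = np.floor(y)
        y = floor + (rng.random(y.shape) < (y - floor))
    else:
        y = np.round(y)                      # round-to-nearest
    y = np.clip(y, -qmax - 1, qmax)
    return y * scale

# Synthetic tensor with an asymmetric, heavy-tailed distribution,
# a stand-in for the asymmetric gradient histograms the abstract mentions.
rng = np.random.default_rng(42)
n = 1_000_000
g = rng.lognormal(mean=-6.0, sigma=2.0, size=n) * rng.choice(
    [1.0, -0.2], size=n, p=[0.7, 0.3])

for name, sr in [("nearest", False), ("stochastic", True)]:
    noise = quantize_uniform(g, num_bits=8, stochastic=sr, rng=rng) - g
    print(f"{name:10s}  bias={noise.mean():+.3e}  variance={noise.var():.3e}")
```

On such a skewed tensor, round-to-nearest collapses most small-magnitude entries to zero and shows a clearly nonzero mean noise (bias), while stochastic rounding keeps the mean noise near zero at the cost of higher variance, which is the trade-off the abstract's first finding is about.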
Keywords
Neural network acceleration, low-precision training, quantization