Iteratively Training Look-Up Tables for Network Quantization

IEEE Journal of Selected Topics in Signal Processing (2020)

Abstract
Operating deep neural networks (DNNs) on devices with limited resources requires reducing both their memory and their computational footprint. Popular reduction methods are network quantization and pruning, which either reduce the word length of the network parameters or remove unneeded weights from the network. In this article, we discuss a general framework for network reduction which we call Look-Up Table Quantization (LUT-Q). For each layer, we learn a value dictionary and an assignment matrix to represent the network weights. We propose a special solver which combines gradient descent and a one-step k-means update to learn both the value dictionaries and the assignment matrices iteratively. This method is very flexible: by constraining the value dictionary, many different reduction problems such as non-uniform network quantization, training of multiplier-less networks, network pruning, or simultaneous quantization and pruning can be implemented without changing the solver. This flexibility allows us to use the same LUT-Q method to train networks for different hardware capabilities.
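The iterative procedure described in the abstract can be illustrated with a short sketch: quantize the weights through the current dictionary and assignments, take a gradient step on the full-precision weights, and then apply a one-step k-means update to the dictionary. The NumPy code below is a minimal sketch under these assumptions; the function name lutq_step, the gradient callback grad_fn, and the learning rate lr are hypothetical placeholders, not details taken from the paper.

```python
# Minimal sketch of one LUT-Q-style training iteration (illustrative, not the
# authors' reference implementation). Names lutq_step, grad_fn and lr are
# hypothetical placeholders.
import numpy as np

def lutq_step(W, d, grad_fn, lr=0.01):
    """One iteration: quantize W via the dictionary d, take a gradient step on
    the full-precision weights, then do a one-step k-means dictionary update."""
    # Assignment step: index of the nearest dictionary value for every weight.
    A = np.argmin(np.abs(W.reshape(-1, 1) - d.reshape(1, -1)), axis=1)

    # Quantized weights used in the forward/backward pass.
    Q = d[A].reshape(W.shape)

    # Gradient step: gradients are evaluated at the quantized weights Q and
    # applied to the full-precision weights W (straight-through style).
    W = W - lr * grad_fn(Q)

    # One-step k-means update: each dictionary value becomes the mean of the
    # full-precision weights currently assigned to it.
    for k in range(d.size):
        mask = A == k
        if mask.any():
            d[k] = W.reshape(-1)[mask].mean()
    return W, d

# Toy usage with a stand-in quadratic loss and a dictionary of K = 4 values.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))
d = np.linspace(W.min(), W.max(), 4)
grad_fn = lambda Q: 2.0 * Q  # gradient of ||Q||^2, used only as a placeholder
for _ in range(100):
    W, d = lutq_step(W, d, grad_fn)
```

In this sketch, constraining the dictionary, for example restricting its entries to powers of two for multiplier-less networks or fixing one entry to zero for pruning, would only change the dictionary-update step, which mirrors the flexibility claimed in the abstract.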
Keywords
Neural network compression,network quantization,look-up table quantization,weight tying,multiplier-less networks,multiplier-less batch normalization