Differentiable Joint Pruning and Quantization for Hardware Efficiency
European Conference on Computer Vision (2020)
Abstract
We present a differentiable joint pruning and quantization (DJPQ) scheme. We frame neural network compression as a joint gradient-based optimization problem, trading off between model pruning and quantization automatically for hardware efficiency. DJPQ incorporates variational information bottleneck-based structured pruning and mixed-bit precision quantization into a single differentiable loss function. In contrast to previous works, which consider pruning and quantization separately, our method enables users to find the optimal trade-off between both in a single training procedure. To utilize the method for more efficient hardware inference, we extend DJPQ to integrate structured pruning with power-of-two bit-restricted quantization. We show that DJPQ significantly reduces the number of Bit-Operations (BOPs) for several networks while maintaining the top-1 accuracy of original floating-point models (e.g., 53× BOPs reduction in ResNet18 on ImageNet, 43× in MobileNetV2). Compared to the conventional two-stage approach, which optimizes pruning and quantization independently, our scheme outperforms it on both accuracy and BOPs. Even when considering bit-restricted quantization, DJPQ achieves larger compression ratios and better accuracy than the two-stage approach.
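For reference, the BOPs metric counts a layer's multiply-accumulate operations (MACs) scaled by the operand bit-widths, so both structured pruning (fewer channels) and quantization (fewer bits) reduce it, which is the trade-off DJPQ optimizes jointly. Below is a minimal Python sketch of this metric under that standard definition; the function name and the example layer shape are illustrative, not taken from the paper:

```python
def conv_bops(c_in, c_out, k, h_out, w_out, b_w, b_a):
    """BOPs of a 2D conv layer: MACs weighted by the weight
    bit-width (b_w) and activation bit-width (b_a)."""
    macs = c_in * c_out * k * k * h_out * w_out
    return macs * b_w * b_a

# Example: the first conv of ResNet18 (3->64 channels, 7x7 kernel,
# 112x112 output map). Quantizing a 32-bit floating-point baseline
# to 8-bit weights and activations alone gives a 16x BOPs reduction;
# pruning shrinks c_in/c_out multiplicatively on top of that.
fp32_bops = conv_bops(3, 64, 7, 112, 112, 32, 32)
int8_bops = conv_bops(3, 64, 7, 112, 112, 8, 8)
print(f"BOPs reduction from quantization alone: {fp32_bops / int8_bops:.0f}x")
```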
Keywords
Joint optimization, Model compression, Mixed precision, Bit-restriction, Variational information bottleneck, Quantization