Smart-DNN+: A Memory-efficient Neural Networks Compression Framework for the Model Inference

ACM Transactions on Architecture and Code Optimization (2023)

Abstract
Deep Neural Networks (DNNs) have achieved remarkable success in various real-world applications. However, running a DNN typically requires hundreds of megabytes of memory, making it challenging to deploy on resource-constrained platforms such as mobile and IoT devices. Although mainstream DNN compression techniques such as pruning, distillation, and quantization can reduce the memory overhead of model parameters during inference, they suffer from three limitations: (i) low model compression ratios for lightweight DNN structures with little redundancy, (ii) potential degradation in inference accuracy, and (iii) inadequate memory compression ratios because they ignore the layer-by-layer nature of DNN inference. To address these issues, we propose Smart-DNN+, a lightweight memory-efficient DNN inference framework that significantly reduces the memory cost of DNN inference without degrading model quality. Specifically, (1) Smart-DNN+ applies a layerwise binary quantizer with a remapping mechanism that greatly reduces model size by quantizing the typical 32-bit floating-point DNN weights to 1-bit signs, layer by layer. To maintain model quality, (2) Smart-DNN+ employs a bucket encoder that retains the quantization error in compressed form by encoding multiple similar floating-point residuals into the same integer bucket ID. When the compressed DNN runs on the user's device, (3) Smart-DNN+ uses a partial decompression strategy that greatly reduces the required memory by first loading the compressed DNN into memory and then dynamically decompressing only the data needed for inference, layer by layer. Experimental results on popular DNNs and datasets demonstrate that Smart-DNN+ achieves 0.17%-0.92% lower memory costs at lower runtime overhead than the state of the art, without degrading inference accuracy. Moreover, Smart-DNN+ can reduce the inference runtime by up to 2.04x compared with the conventional DNN inference workflow.
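The three mechanisms above lend themselves to a compact illustration. The sketch below shows one plausible way to wire together a layerwise sign quantizer, a residual bucket encoder, and layer-by-layer partial decompression during inference, using NumPy. The per-layer scale factor, the bucket width BUCKET_WIDTH, and all function names are assumptions made for illustration only, not the paper's actual implementation.

```python
import numpy as np

# Hypothetical bucket width for residual encoding; the paper's actual
# bucketing scheme is not specified in the abstract.
BUCKET_WIDTH = 1e-3

def compress_layer(weights: np.ndarray):
    """Binary-quantize one layer's 32-bit weights to 1-bit signs and
    bucket-encode the floating-point residuals (illustrative only)."""
    scale = np.abs(weights).mean()             # per-layer remapping factor (assumed)
    signs = np.signbit(weights)                # 1 bit per weight
    residual = weights - np.where(signs, -scale, scale)
    bucket_ids = np.round(residual / BUCKET_WIDTH).astype(np.int16)
    return np.packbits(signs), bucket_ids, scale, weights.shape

def decompress_layer(packed_signs, bucket_ids, scale, shape):
    """Reconstruct one layer's weights on demand during inference."""
    signs = np.unpackbits(packed_signs)[: np.prod(shape)].reshape(shape)
    base = np.where(signs == 1, -scale, scale)
    return base + bucket_ids.reshape(shape) * BUCKET_WIDTH

def run_inference(compressed_layers, x, layer_fns):
    """Partial decompression: only the current layer's weights are
    materialized in memory; earlier layers are freed immediately."""
    for blob, fn in zip(compressed_layers, layer_fns):
        w = decompress_layer(*blob)
        x = fn(x, w)       # e.g. matmul + activation for that layer
        del w              # drop the decompressed weights right away
    return x
```

The point of the third step is that only the weights of the layer currently being executed are held in full precision; everything else stays in its compressed form, which is where the layerwise memory savings described in the abstract would come from.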
Keywords
Model compression, quantization, neural network, memory overhead