An Accuracy-Driven Compression Methodology to Derive Efficient Codebook-Based CNNs

2022 IEEE International Conference on Omni-layer Intelligent Systems (COINS)

Abstract
Codebook-based optimizations are a class of algorithmic-level transformations that effectively reduce the computing and memory requirements of Convolutional Neural Networks (CNNs). This approach tightly limits the number of unique weights in each layer, allowing the values in use to be stored in codebooks containing a small number of floating-point entries. CNN models are then represented as low-bitwidth indexes into such codebooks. This work introduces a novel iterative methodology to find highly beneficial schemes that trade off accuracy and model compression in codebook-based CNNs. Our strategy retrieves non-uniform solutions driven by an accuracy constraint embedded in the optimization loop. Our results indicate that, for a 1% accuracy degradation, our methodology compresses baseline floating-point CNN models by up to 19x. Moreover, by reducing the number of memory accesses, our strategy increases energy efficiency and improves inference performance by up to 91%.
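To make the mechanism concrete, the sketch below illustrates a generic codebook-based (weight-clustering) compression step and a simplified accuracy-driven loop. It is not the paper's implementation: the k-means clustering, the hypothetical evaluate_accuracy callback, the codebook-halving schedule, and the 256-entry starting size are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_codebook(weights: np.ndarray, codebook_size: int):
    """Cluster one layer's weights into `codebook_size` float entries and
    re-express the layer as low-bitwidth indexes into that codebook."""
    km = KMeans(n_clusters=codebook_size, n_init=10, random_state=0)
    km.fit(weights.reshape(-1, 1))
    codebook = km.cluster_centers_.astype(np.float32).ravel()   # few float entries
    indexes = km.labels_.astype(np.uint8).reshape(weights.shape)  # log2(size)-bit ids
    return codebook, indexes

def reconstruct(codebook: np.ndarray, indexes: np.ndarray) -> np.ndarray:
    """Look up each index in the codebook to recover approximate weights."""
    return codebook[indexes]

def compress_model(layers, evaluate_accuracy, baseline_acc, max_drop=0.01):
    """Illustrative accuracy-driven loop (hypothetical helpers, not the
    authors' procedure): per layer, halve the codebook size while the
    accuracy drop stays within `max_drop`, yielding a non-uniform scheme."""
    scheme = {}
    for name, w in layers.items():
        size = 256
        best = build_codebook(w, size)
        while size > 2:
            cand = build_codebook(w, size // 2)
            if baseline_acc - evaluate_accuracy(name, reconstruct(*cand)) <= max_drop:
                size, best = size // 2, cand
            else:
                break
        scheme[name] = (size, *best)
    return scheme
```

With a 16-entry codebook, for instance, each float32 weight is replaced by a 4-bit index, giving roughly an 8x reduction in weight storage before any further optimization.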
Keywords
CNN compression, Clustering, Ensembling