Learning Tucker Compression for Deep CNN

Pengyi Hao, Xiaojuan Li, Fuli Wu

2022 Data Compression Conference (DCC)

Abstract
Recently, tensor decomposition approaches have been used to compress deep convolutional neural networks (CNNs) to obtain faster CNNs with fewer parameters. However, tensor-decomposition-based CNN compression approaches have two problems: first, they usually decompose the CNN layer by layer, ignoring the correlation between layers; second, training and compressing the CNN are carried out separately, which easily leads to locally optimal ranks. In this paper, Learning Tucker Compression (LTC) is proposed. It obtains the best Tucker ranks by jointly optimizing the CNN's loss function and Tucker's cost function, so that training and compression are carried out at the same time. It can optimize the CNN directly without decomposing the whole network layer by layer, and can fine-tune the whole network directly without using fixed parameters. LTC is verified on two public datasets. Experiments show that LTC can make networks such as ResNet and VGG faster with nearly the same classification accuracy, surpassing current tensor decomposition approaches.
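The joint objective described in the abstract can be illustrated with a short sketch. The following is a minimal PyTorch sketch, not the authors' code: it factorizes a convolution in Tucker-2 form (the standard 1x1 -> kxk -> 1x1 structure used in tensor-decomposition CNN compression) and couples the task loss with a group-sparsity penalty over the core's rank dimensions as a stand-in for Tucker's cost function, since the abstract does not give the exact cost term. All names, the ranks r_in and r_out, and the weight lam are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TuckerConv2d(nn.Module):
    # Tucker-2 factorized convolution: a 1x1 conv projects c_in channels
    # down to rank r_in, a kxk "core" conv maps r_in to r_out, and a
    # final 1x1 conv restores c_out channels.
    def __init__(self, c_in, c_out, k, r_in, r_out, padding=0):
        super().__init__()
        self.reduce = nn.Conv2d(c_in, r_in, 1, bias=False)    # input-mode factor
        self.core = nn.Conv2d(r_in, r_out, k, padding=padding, bias=False)
        self.expand = nn.Conv2d(r_out, c_out, 1, bias=False)  # output-mode factor

    def forward(self, x):
        return self.expand(self.core(self.reduce(x)))

def tucker_cost(layer):
    # Differentiable surrogate (an assumption, not the paper's term) for the
    # Tucker cost: group-lasso norms over the core's output and input rank
    # slices, so low-energy ranks shrink during training and can be pruned.
    w = layer.core.weight                                      # (r_out, r_in, k, k)
    out_groups = w.flatten(1).norm(dim=1).sum()                # one norm per r_out slice
    in_groups = w.transpose(0, 1).flatten(1).norm(dim=1).sum() # one norm per r_in slice
    return out_groups + in_groups

def joint_loss(logits, targets, tucker_layers, lam=1e-4):
    # Training and compression in one objective: the classification loss
    # plus the summed Tucker cost over all factorized layers.
    return F.cross_entropy(logits, targets) + lam * sum(
        tucker_cost(l) for l in tucker_layers)

A quick shape check under these assumptions:

layer = TuckerConv2d(64, 128, 3, r_in=16, r_out=32, padding=1)
y = layer(torch.randn(8, 64, 32, 32))  # -> (8, 128, 32, 32)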
Keywords
deep CNN, deep convolutional neural networks, faster CNN, CNN compression approaches, LTC, CNN loss function, tensor decomposition approach, Tucker cost function, learning Tucker compression