
Post-Training Quantization for Vision Transformer in Transformed Domain

2023 IEEE International Conference on Multimedia and Expo (ICME 2023)

Peng Cheng Laboratory | Chinese Academy of Sciences | Nanyang Technological University

Abstract
As successors to convolutional neural networks (CNNs), transformer-based models have achieved strong performance in computer vision tasks. Compressing vision transformers to low bit-widths brings a number of practical benefits, including higher inference speed, a smaller memory footprint, and reduced energy consumption. Existing model compression methods, especially quantization techniques, ignore the joint statistics of weights, resulting in sub-optimal task performance at a given quantization bit rate. In this paper, we propose to apply a transform before quantization to decorrelate the vision transformer's weights, and the entire compression flow is optimized in a rate-distortion framework to minimize network output errors rather than merely quantization errors or layer-wise output errors. Extensive experimental results on a variety of vision transformers (e.g., Swin, ViT, and DeiT) demonstrate that the proposed method outperforms the state of the art: it can quantize both weights and activations to 6-bit without a significant accuracy drop.
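The core idea in the abstract — decorrelate weights with a transform, then quantize in the transformed domain — can be illustrated with a minimal NumPy sketch. Here a PCA/KLT-style orthogonal basis (derived from the weight covariance) plays the role of the decorrelating transform, followed by uniform symmetric quantization and an inverse transform; this is an assumption-laden simplification, not the paper's exact transform or its rate-distortion optimization of network output error.

```python
import numpy as np

def transform_domain_quantize(W, n_bits=6):
    """Illustrative sketch: decorrelate weight columns with a
    data-derived orthogonal (KLT-like) basis, uniformly quantize the
    transform coefficients, then invert the transform.  The paper's
    actual transform and rate-distortion flow are more elaborate."""
    # Center the rows and estimate the column covariance.
    mean = W.mean(axis=0, keepdims=True)
    Wc = W - mean
    cov = Wc.T @ Wc / Wc.shape[0]
    # Eigenvectors of the covariance give a decorrelating basis U.
    _, U = np.linalg.eigh(cov)
    coeffs = Wc @ U                      # forward transform
    # Uniform symmetric quantization of the decorrelated coefficients.
    scale = np.abs(coeffs).max() / (2 ** (n_bits - 1) - 1)
    deq = np.round(coeffs / scale) * scale
    # Inverse transform back to the weight domain.
    return deq @ U.T + mean

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 32))        # toy weight matrix
W_hat = transform_domain_quantize(W, n_bits=6)
err = np.abs(W - W_hat).max()            # small reconstruction error
```

Because `U` is orthogonal, quantization error introduced in the coefficient domain is not amplified when mapped back to the weight domain, which is what makes quantizing decorrelated coefficients attractive at a fixed bit budget.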
Key words
Vision transformer, post-training quantization, transform, model compression
