Revisiting Learned Image Compression With Statistical Measurement of Latent Representations


Recent learned image compression models surpass manually designed methods in rate-distortion performance by introducing nonlinear transforms and end-to-end optimization. However, quantitative measurements that efficiently evaluate the latent representations inferred by learned image compression models are still lacking. To address this problem, we develop novel measurements of the robustness and importance of latent representations. We first propose an admissible range, efficiently estimated via gradient ascent and descent, for establishing the empirical distribution of latent representations. From this distribution, the in-distribution region within the admissible range is derived to measure the robustness and channel importance of latent representations of natural images. Visualization demonstrates that the statistics of latent representations differ significantly in robustness and linearity within and outside the in-distribution region. To the best of our knowledge, this paper proposes the first statistically meaningful measurements for learned image compression, and it successfully applies them to corruption alleviation during successive image compression and to post-training pruning in a training-free fashion. Compared with existing methods, the shrunk in-distribution constraint derived from the in-distribution region achieves superior robustness and rate-distortion performance in successive compression. The channel-importance measure allows post-training pruning to achieve comparable rate-distortion performance while reducing entropy coding time by up to 60%.
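To make the idea of an admissible range concrete, the following is a minimal toy sketch. It assumes a one-dimensional latent and a stand-in `decode` function (here `tanh`) in place of a real synthesis transform, and it uses a fixed-step search in the ascent and descent directions as a simplified stand-in for the gradient-based estimation described in the abstract; the function and parameter names are hypothetical, not from the paper.

```python
from math import tanh

# Hypothetical stand-in for a learned codec's synthesis transform.
def decode(y):
    return tanh(y)

# Squared reconstruction error relative to the reference latent.
def distortion(y, y_ref):
    return (decode(y) - decode(y_ref)) ** 2

def admissible_range(y0, tol=1e-3, step=0.005, max_iter=10000):
    """Estimate an interval [lo, hi] around latent value y0 in which the
    reconstruction distortion stays below tol, by stepping outward in the
    ascent (+) and descent (-) directions until the tolerance is crossed."""
    hi = y0
    for _ in range(max_iter):
        if distortion(hi + step, y0) > tol:
            break
        hi += step
    lo = y0
    for _ in range(max_iter):
        if distortion(lo - step, y0) > tol:
            break
        lo -= step
    return lo, hi

lo, hi = admissible_range(0.0)
```

Collecting such ranges over many latent elements of natural images would yield an empirical distribution, from which an in-distribution region could then be delimited.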
Key words
Learned image compression, latent representation, robustness, interpretability, successive compression, post-training pruning