We proposed a new variable-rate image compression framework that applies generalized octave convolution and generalized octave transposed-convolution layers and incorporates octave-based shortcut connections
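A minimal sketch of the octave convolution building block referred to above is given below in PyTorch. The inter-frequency paths here use plain average pooling and nearest-neighbour upsampling; the generalized variant in the framework handles this resampling differently, so the layer is an illustration of the idea rather than the paper's exact layer.

```python
# Minimal sketch of an octave convolution layer (PyTorch), for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class OctaveConv(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3, alpha=0.5):
        super().__init__()
        lo_in, lo_out = int(alpha * in_ch), int(alpha * out_ch)
        hi_in, hi_out = in_ch - lo_in, out_ch - lo_out
        pad = kernel_size // 2
        # Four information paths: high->high, high->low, low->low, low->high.
        self.h2h = nn.Conv2d(hi_in, hi_out, kernel_size, padding=pad)
        self.h2l = nn.Conv2d(hi_in, lo_out, kernel_size, padding=pad)
        self.l2l = nn.Conv2d(lo_in, lo_out, kernel_size, padding=pad)
        self.l2h = nn.Conv2d(lo_in, hi_out, kernel_size, padding=pad)

    def forward(self, x_h, x_l):
        # High-frequency output: same-resolution path plus upsampled low path.
        y_h = self.h2h(x_h) + F.interpolate(self.l2h(x_l), scale_factor=2, mode="nearest")
        # Low-frequency output: half-resolution path plus downsampled high path.
        y_l = self.l2l(x_l) + self.h2l(F.avg_pool2d(x_h, 2))
        return y_h, y_l

x_h = torch.randn(1, 96, 64, 64)   # high-frequency feature map
x_l = torch.randn(1, 96, 32, 32)   # low-frequency map at half resolution
y_h, y_l = OctaveConv(192, 192)(x_h, x_l)
```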
Block partitioning opens the possibility of acceleration, the contextual prediction module effectively exploits the correlation between blocks to improve coding efficiency, and the boundary-aware post-processing module accounts for blocking effects to improve subjective and objective quality
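To make the block-based pipeline concrete, a small numpy sketch of the partition and reassembly step is given below; the contextual prediction and boundary-aware post-processing modules are not reproduced, and the 64-pixel block size is an arbitrary choice.

```python
# Sketch of the block partition / reassembly step only (numpy).
import numpy as np

def partition(img, b):
    """Split an H x W x C image (H, W divisible by b) into b x b blocks."""
    h, w, c = img.shape
    blocks = img.reshape(h // b, b, w // b, b, c).swapaxes(1, 2)
    return blocks.reshape(-1, b, b, c)            # (num_blocks, b, b, C)

def reassemble(blocks, h, w):
    b, c = blocks.shape[1], blocks.shape[3]
    grid = blocks.reshape(h // b, w // b, b, b, c).swapaxes(1, 2)
    return grid.reshape(h, w, c)

img = np.random.rand(256, 384, 3).astype(np.float32)
blocks = partition(img, 64)                       # blocks can be coded in parallel
assert np.allclose(reassemble(blocks, 256, 384), img)
```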
This paper detailed an end-to-end framework for efficient image compression aimed at remote machine task analysis, using a chain composed of a compression module and a task algorithm that is optimized end-to-end
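A hedged sketch of such a chain follows: a toy codec and a toy task network are trained with a single objective that combines an assumed rate proxy with the task loss. The module definitions, the rate proxy, and the trade-off weight lam are illustrative placeholders, not the paper's formulation.

```python
# Toy codec followed by a toy task network, trained jointly on rate + task loss.
import torch
import torch.nn as nn

codec_enc = nn.Conv2d(3, 32, 4, stride=4)           # toy analysis transform
codec_dec = nn.ConvTranspose2d(32, 3, 4, stride=4)  # toy synthesis transform
task_net = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(3, 10))

opt = torch.optim.Adam([*codec_enc.parameters(), *codec_dec.parameters(),
                        *task_net.parameters()], lr=1e-4)
lam = 0.1                                            # rate/task trade-off (assumed)

x = torch.rand(8, 3, 64, 64)
labels = torch.randint(0, 10, (8,))

y = codec_enc(x)
y_hat = y + torch.empty_like(y).uniform_(-0.5, 0.5)  # additive-noise quantization proxy
rate_proxy = y_hat.abs().mean()                      # crude stand-in for an entropy model
x_hat = codec_dec(y_hat)
task_loss = nn.functional.cross_entropy(task_net(x_hat), labels)

loss = rate_proxy + lam * task_loss                  # one objective for the whole chain
loss.backward()
opt.step()
```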
We describe an end-to-end trainable model for image compression based on variational autoencoders. The model incorporates a hyperprior to effectively capture spatial dependencies in the latent representation. This hyperprior relates to side information, a concept universal to virtually all modern image codecs
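The structure can be sketched roughly as follows: a hyper-latent z is transmitted as side information and decoded into per-element scales of a conditional Gaussian model for the latent y. The layer shapes, the additive-noise quantization proxy, and the softplus scale parameterization below are assumptions for illustration; full implementations also add nonlinearities and a learned prior on z.

```python
# Rough sketch of the scale-hyperprior structure (PyTorch).
import torch
import torch.nn as nn

enc   = nn.Conv2d(3, 128, 5, stride=4, padding=2)    # analysis transform  -> y
dec   = nn.ConvTranspose2d(128, 3, 4, stride=4)      # synthesis transform -> x_hat
h_enc = nn.Conv2d(128, 64, 5, stride=4, padding=2)   # hyper-analysis      -> z
h_dec = nn.ConvTranspose2d(64, 128, 4, stride=4)     # hyper-synthesis     -> sigma

x = torch.rand(1, 3, 256, 256)
y = enc(x)
z = h_enc(torch.abs(y))
z_hat = z + torch.empty_like(z).uniform_(-0.5, 0.5)  # quantization proxy
sigma = nn.functional.softplus(h_dec(z_hat)) + 1e-6  # predicted scales for y
y_hat = y + torch.empty_like(y).uniform_(-0.5, 0.5)

# Rate of y under the zero-mean Gaussian whose scale the hyperprior predicts:
gauss = torch.distributions.Normal(0.0, sigma)
p_y = gauss.cdf(y_hat + 0.5) - gauss.cdf(y_hat - 0.5)
bits_y = -torch.log2(p_y.clamp_min(1e-9)).sum()
x_hat = dec(y_hat)
```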
We showed in Figure 2 that a Gaussian mixture model-based entropy model provides a net benefit and outperforms the simpler Gaussian scale mixture-based model in terms of rate–distortion performance without increasing the asymptotic complexity of the model
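For reference, a sketch of how a discretized Gaussian-mixture likelihood can be evaluated for quantized latents is shown below; the per-element mixture weights, means, and scales would normally be predicted by the hyper and context networks and are random placeholders here.

```python
# Discretized Gaussian-mixture likelihood of quantized latents (PyTorch).
import torch

def gmm_likelihood(y_hat, weights, means, scales):
    # y_hat: (...,); weights/means/scales: (..., K), weights sum to 1 over K.
    comp = torch.distributions.Normal(means, scales)
    upper = comp.cdf(y_hat.unsqueeze(-1) + 0.5)
    lower = comp.cdf(y_hat.unsqueeze(-1) - 0.5)
    return (weights * (upper - lower)).sum(dim=-1).clamp_min(1e-9)

K = 3
y_hat = torch.randint(-10, 10, (1, 192, 16, 16)).float()          # quantized latents
weights = torch.softmax(torch.randn(1, 192, 16, 16, K), dim=-1)
means = torch.randn(1, 192, 16, 16, K)
scales = torch.nn.functional.softplus(torch.randn(1, 192, 16, 16, K)) + 1e-6
bits = -torch.log2(gmm_likelihood(y_hat, weights, means, scales)).sum()
```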
We demonstrated that constraining the application domain to street scene images leads to additional storage savings, and explored selectively combining fully synthesized image content with preserved regions when semantic label maps are available
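A toy sketch of that selective combination step follows, assuming a binary mask derived from the semantic label map decides which regions are taken from the generator and which from the faithfully decoded image; the label values chosen for synthesis are placeholders.

```python
# Combining synthesized and preserved regions via a semantic label map (numpy).
import numpy as np

decoded   = np.random.rand(256, 512, 3).astype(np.float32)   # faithfully reconstructed
generated = np.random.rand(256, 512, 3).astype(np.float32)   # fully synthesized content
labels    = np.random.randint(0, 19, (256, 512))              # semantic label map

synth_classes = {8, 9, 10}                                    # classes to synthesize (assumed)
mask = np.isin(labels, list(synth_classes))[..., None]        # regions to synthesize

output = np.where(mask, generated, decoded)                   # preserved elsewhere
```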
Hardware chips optimized for convolutional neural networks are likely to be widely available soon, given that these networks are key to good performance in so many applications
We presented a general architecture for compression with recurrent neural networks, content-based residual scaling, and a new variation of the Gated Recurrent Unit [3], which provided the highest PSNR-HVS (peak signal-to-noise ratio weighted for the human visual system) among the models trained on the high-entropy dataset
We have presented a complete image compression method based on nonlinear transform coding, and a framework to optimize it end-to-end for rate–distortion performance
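The optimization target can be sketched as the familiar rate-distortion Lagrangian; the 255^2 scaling of MSE (common when pixels are normalized to [0, 1]) and the value of lam below are assumed defaults, not the paper's settings.

```python
# Rate-distortion Lagrangian L = R + lambda * D; sweeping lam traces the R-D curve.
import torch

def rd_loss(bits, x, x_hat, lam=0.01):
    num_pixels = x.shape[0] * x.shape[-2] * x.shape[-1]
    bpp = bits / num_pixels                       # rate term, bits per pixel
    mse = torch.mean((x - x_hat) ** 2)            # distortion term
    return bpp + lam * (255.0 ** 2) * mse         # 8-bit MSE scaling (assumed)

x = torch.rand(2, 3, 64, 64)
x_hat = x + 0.01 * torch.randn_like(x)
loss = rd_loss(bits=torch.tensor(5000.0), x=x, x_hat=x_hat)
```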
We present a machine learning-based approach to lossy image compression that outperforms all existing codecs while running in real time. Our algorithm typically produces files 2.5 times smaller than JPEG and JPEG 2000, 2 times smaller than WebP, and 1.7 times smaller than BPG ...
We propose a general framework for variable-rate image compression and a novel architecture based on convolutional and deconvolutional long short-term memory (LSTM) recurrent networks
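A compact sketch of the iterative, variable-rate residual loop behind such architectures is given below; plain convolutional layers and a sign() binarizer stand in for the convolutional/deconvolutional LSTM units and the learned binarizer, so this illustrates the idea rather than the proposed model.

```python
# Iterative residual coding: each pass encodes the current residual to a small
# binary code and decodes an update; stopping after k passes gives k bit rates.
import torch
import torch.nn as nn

enc = nn.Sequential(nn.Conv2d(3, 32, 4, stride=4), nn.Tanh())   # -> codes in [-1, 1]
dec = nn.Sequential(nn.ConvTranspose2d(32, 3, 4, stride=4))

x = torch.rand(1, 3, 64, 64)
recon = torch.zeros_like(x)
residual = x.clone()

for step in range(4):                        # more iterations -> more bits, higher quality
    code = torch.sign(enc(residual))         # 1 bit per code element
    recon = recon + dec(code)                # additive reconstruction update
    residual = x - recon
    bits = code.numel() * (step + 1)
    print(f"iteration {step + 1}: {bits} bits sent so far")
```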
The last step consists of decoding the zigzag order and recreating the 8 x 8 blocks. The inverse discrete cosine transform (IDCT) reconstructs each value in the spatial domain by summing the contributions that each of the 64 frequency coefficients makes to that pixel.
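A short sketch of these two steps using scipy's orthonormal IDCT; the zigzag table is rebuilt from the standard JPEG scan order.

```python
# Undo the zigzag scan to rebuild an 8 x 8 coefficient block, then apply the 2-D IDCT.
import numpy as np
from scipy.fft import idctn

def zigzag_indices(n=8):
    # Standard zigzag order: traverse anti-diagonals, alternating direction.
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

def decode_block(zigzag_coeffs):
    block = np.zeros((8, 8))
    for value, (r, c) in zip(zigzag_coeffs, zigzag_indices()):
        block[r, c] = value                       # undo the zigzag ordering
    return idctn(block, norm="ortho")             # 8 x 8 IDCT -> spatial-domain pixels

coeffs = np.zeros(64)
coeffs[0] = 1024.0                                # DC term only -> flat block
pixels = decode_block(coeffs)
```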
Data is compressed by reducing its redundancy, but this also makes the data less reliable and more prone to errors. In this paper, a novel approach to image compression is presented, based on a new method called the Five Modulus Method (FMM). The ...
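Since the description is cut off here, the sketch below reflects one common reading of the method's name rather than the paper's exact procedure: each pixel value is rounded to the nearest multiple of 5, shrinking the symbol alphabet before further coding. The block size and rounding rule are assumptions.

```python
# Assumed core FMM transform: round every 8-bit pixel to the nearest multiple of 5.
import numpy as np

def fmm_transform(img):
    return np.clip(np.round(img.astype(np.float32) / 5.0) * 5.0, 0, 255).astype(np.uint8)

img = np.random.randint(0, 256, (8, 8, 3), dtype=np.uint8)   # one 8 x 8 RGB block
quantized = fmm_transform(img)
symbols = quantized // 5                                      # at most 52 distinct symbols
```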
The reconstructed images have better quality and fewer blocking artifacts compared to the BAS-2008 algorithm. This correspondence introduced an approximation algorithm for computing the discrete cosine transform based on matrix polar decomposition
The rapid increase in the range and use of electronic imaging justifies attention to the systematic design of image compression systems and to providing the image quality needed in different applications