
Enhanced Lightweight Network with CNN and Improved Transformer for Image Super-Resolution.

ICNC-FSKD (2023)

Abstract
In recent years, there have been significant advancements in deep learning-based lightweight image super-resolution (SR) reconstruction techniques. However, challenges remain in practical applications. Many existing lightweight SR algorithms simplify the model by reducing the number of parameters or changing the combination of convolutions, which significantly degrades model performance and stability and leads to poor reconstruction results. To address this issue, we propose a Cosine Self-Attention mechanism and deepen the network to improve the Swin Transformer, enhancing the model's performance and stability. Experimental results show that the proposed approach achieves stronger and more stable reconstruction with a lower parameter count than existing lightweight SR models, outperforming them in both reconstruction quality and model complexity. The PSNR/SSIM of our method for ×2 and ×4 SR reconstruction on the Set5, Set14, BSD100, Urban100, and Manga109 datasets exceed those of most existing lightweight SR models.
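The abstract does not give the formula for the Cosine Self-Attention mechanism, but scaled cosine attention (as popularized by Swin Transformer V2) replaces the dot-product similarity between queries and keys with their cosine similarity divided by a temperature, which bounds the attention logits and stabilizes training. A minimal NumPy sketch under that assumption (all names, shapes, and the fixed temperature `tau` are illustrative, not taken from the paper):

```python
import numpy as np

def cosine_self_attention(x, wq, wk, wv, tau=0.1):
    """Scaled cosine self-attention sketch (Swin Transformer V2 style).

    Queries and keys are L2-normalized so their inner product becomes a
    cosine similarity, then divided by a temperature tau; logits are
    therefore bounded in [-1/tau, 1/tau], unlike dot-product attention.
    """
    q, k, v = x @ wq, x @ wk, x @ wv
    # L2-normalize queries and keys so q @ k.T is cosine similarity
    q = q / (np.linalg.norm(q, axis=-1, keepdims=True) + 1e-6)
    k = k / (np.linalg.norm(k, axis=-1, keepdims=True) + 1e-6)
    logits = (q @ k.T) / tau
    logits -= logits.max(axis=-1, keepdims=True)  # numerical stability
    attn = np.exp(logits)
    attn /= attn.sum(axis=-1, keepdims=True)      # softmax over keys
    return attn @ v

# Tiny example: 4 tokens with embedding dimension 8
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))
wq, wk, wv = (rng.standard_normal((8, 8)) for _ in range(3))
out = cosine_self_attention(x, wq, wk, wv)
```

In a real Swin-style block this would be applied per window and per head, with a learnable per-head temperature; the sketch only shows the normalized-similarity idea.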
Key words
Deep learning, Image super-resolution, Attention mechanism, Swin Transformer