Self-Calibrated Efficient Transformer for Lightweight Super-Resolution
IEEE Conference on Computer Vision and Pattern Recognition (2022)
Abstract
Recently, deep learning has been successfully applied to single-image super-resolution (SISR) with remarkable performance. However, most existing methods focus on building more complex networks with a large number of layers, which entails heavy computational costs and memory storage. To address this problem, we present a lightweight Self-Calibrated Efficient Transformer (SCET) network. The architecture of SCET mainly consists of a self-calibrated module and an efficient transformer block, where the self-calibrated module adopts the pixel attention mechanism to extract image features effectively. To further exploit the contextual information in these features, we employ an efficient transformer that helps the network capture similar features over long distances and thus recover sufficient texture details. We provide comprehensive results for different settings of the overall network. Our proposed method achieves more remarkable performance than baseline methods. The source code and pre-trained models are available at https://github.com/AlexZou14/SCET.
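The abstract names the pixel attention mechanism as the feature-extraction core of the self-calibrated module. The exact SCET implementation is not given here, but pixel attention in this line of work is commonly a 1x1 convolution followed by a sigmoid, producing a per-pixel, per-channel attention map that rescales the input features. A minimal NumPy sketch under that assumption (the function name and weight shapes are illustrative, not from the paper):

```python
import numpy as np

def pixel_attention(x, w, b):
    """Hedged sketch of a pixel attention block.

    x: (C, H, W) input feature map
    w: (C, C) weights of a 1x1 convolution (pure channel mixing)
    b: (C,) bias

    Returns x rescaled by a sigmoid attention map of the same shape.
    """
    # A 1x1 convolution is a channel-mixing matmul at every spatial position.
    attn = np.einsum('oc,chw->ohw', w, x) + b[:, None, None]
    attn = 1.0 / (1.0 + np.exp(-attn))  # sigmoid -> attention values in (0, 1)
    return x * attn                      # element-wise rescaling of features
```

Because the attention values lie strictly in (0, 1), the block can only attenuate features per pixel; the surrounding convolutions in the module are what re-amplify the retained responses.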
Keywords
lightweight super resolution, deep learning, single image super resolution, complex network, heavy computational costs, memory storage, transformer block, pixel attention mechanism, image features, transformer network, self calibrated efficient transformer, SISR, SCET