Attentive frequency learning network for super-resolution

Applied Intelligence (2021)

Abstract
Benefiting from their strong capability of capturing long-range dependencies, a series of self-attention based single image super-resolution (SISR) methods have achieved promising performance. However, existing self-attention mechanisms generally incur high computational costs in both training and inference. In this study, we propose an attentive frequency learning network (AFLN) for single image super-resolution. AFLN greatly reduces the computational cost of the self-attention mechanism while still capturing long-range dependencies in SISR tasks. Specifically, AFLN consists of a series of attentive frequency learning blocks (AFLBs). In each AFLB, we first integrate hierarchical features through residual dense connections and decompose them into low- and high-frequency subbands via the discrete wavelet transform (DWT), each subband having half the spatial resolution of the original features. We then apply self-attention to explore long-range dependencies in the low- and high-frequency domains separately. Since each subband contains a quarter of the original pixels, self-attention is computed over a quarter of the original input size, greatly reducing computational cost. In addition, computing attention separately in the low- and high-frequency domains effectively preserves detailed information. Finally, we apply the inverse discrete wavelet transform (IDWT) to reconstruct the attentive features. Extensive experiments on publicly available datasets demonstrate the efficiency and effectiveness of AFLN against state-of-the-art methods.
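To illustrate the pipeline the abstract describes (DWT decomposition, per-subband self-attention, IDWT reconstruction), here is a minimal PyTorch sketch. It assumes a single-level Haar wavelet and standard multi-head scaled dot-product attention; the class names, head count, the shared attention module across the three high-frequency subbands, and the residual connection are illustrative assumptions, not the authors' implementation, and the residual dense feature integration inside each AFLB is omitted.

```python
# Hypothetical sketch of an attentive frequency learning block (AFLB).
# Assumptions: Haar DWT, nn.MultiheadAttention per subband; not the paper's code.
import torch
import torch.nn as nn


def haar_dwt(x):
    """Single-level 2D Haar DWT: (B, C, H, W) -> four (B, C, H/2, W/2) subbands."""
    a = x[..., 0::2, 0::2]  # even rows, even cols
    b = x[..., 0::2, 1::2]  # even rows, odd cols
    c = x[..., 1::2, 0::2]  # odd rows, even cols
    d = x[..., 1::2, 1::2]  # odd rows, odd cols
    ll = (a + b + c + d) / 2  # low-frequency approximation
    lh = (a + b - c - d) / 2  # vertical detail
    hl = (a - b + c - d) / 2  # horizontal detail
    hh = (a - b - c + d) / 2  # diagonal detail
    return ll, lh, hl, hh


def haar_idwt(ll, lh, hl, hh):
    """Exact inverse of haar_dwt: reassemble the full-resolution feature map."""
    B, C, H, W = ll.shape
    x = ll.new_zeros(B, C, 2 * H, 2 * W)
    x[..., 0::2, 0::2] = (ll + lh + hl + hh) / 2
    x[..., 0::2, 1::2] = (ll + lh - hl - hh) / 2
    x[..., 1::2, 0::2] = (ll - lh + hl - hh) / 2
    x[..., 1::2, 1::2] = (ll - lh - hl + hh) / 2
    return x


class SubbandAttention(nn.Module):
    """Self-attention over the pixels of one half-resolution subband."""

    def __init__(self, channels, num_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, x):
        B, C, H, W = x.shape
        tokens = x.flatten(2).transpose(1, 2)       # (B, H*W, C) token sequence
        out, _ = self.attn(tokens, tokens, tokens)  # scaled dot-product attention
        return out.transpose(1, 2).reshape(B, C, H, W)


class AFLBSketch(nn.Module):
    """One block: DWT -> attention per subband -> IDWT -> residual add."""

    def __init__(self, channels, num_heads=4):
        super().__init__()
        self.low_attn = SubbandAttention(channels, num_heads)
        # One module shared across the three high-frequency subbands (assumption).
        self.high_attn = SubbandAttention(channels, num_heads)

    def forward(self, x):
        ll, lh, hl, hh = haar_dwt(x)
        ll = self.low_attn(ll)                      # low-frequency branch
        lh, hl, hh = [self.high_attn(s) for s in (lh, hl, hh)]
        return x + haar_idwt(ll, lh, hl, hh)


if __name__ == "__main__":
    block = AFLBSketch(channels=64)
    y = block(torch.randn(1, 64, 48, 48))           # H and W must be even
    print(y.shape)                                  # torch.Size([1, 64, 48, 48])
```

Note how the cost saving follows directly from the decomposition: each subband holds H/2 x W/2 tokens, so the quadratic attention term operates on a quarter of the original token count per branch.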
Keywords
Super-resolution, Self-attention, Wavelet transform, Frequency domain