Bidirectional Multi-scale Deformable Attention for Video Super-Resolution

Multimedia Tools and Applications (2024)

Abstract
Video super-resolution aims to reconstruct high-resolution video frames from their low-resolution counterparts. It remains a challenging problem because it requires accurate temporal frame alignment and effective spatial feature fusion during spatial-temporal modeling. Existing deep-learning-based methods have limitations in accurately aligning and effectively fusing frames with multi-scale feature information. In this paper, we propose Bidirectional Multi-scale Deformable Attention (BMDA) for video super-resolution in terms of propagation, alignment, and fusion. More specifically, the Deformable Alignment Module (DAM) in BMDA consists of two components: a Multi-scale Deformable Convolution Module (MDCM) and a Multi-scale Attention Module (MAM). The MDCM handles offset information at different scales and aligns adjacent frames at the feature level, improving the robustness of alignment among adjacent frames. The MAM extracts local and global features from the aligned features for aggregation, so that feature information is compensated between pixels. Additionally, to make full use of shallow features, a dense connection structure between layers is adopted in the bidirectional propagation framework to achieve better visual quality. In particular, the proposed BMDA outperforms BasicVSR by up to 1.28 dB in PSNR when the batch size is set to 2. Experimental results on public video benchmark datasets demonstrate that the proposed method achieves superior performance on videos with large motion compared with state-of-the-art methods.
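
The alignment and fusion steps described in the abstract can be illustrated with a compact sketch. Below is a minimal PyTorch example of a two-scale deformable alignment step followed by a local/global attention gate, in the spirit of the MDCM and MAM; the channel counts, the two-level pyramid, the module and variable names (`MultiScaleDeformAlign`, `LocalGlobalAttention`, `ref`, `nbr`), and the specific gating scheme are illustrative assumptions, not the authors' implementation.

```python
# Sketch only: assumes 64-channel features and a two-scale offset pyramid.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.ops import DeformConv2d


class MultiScaleDeformAlign(nn.Module):
    """Align a neighbor-frame feature map to the reference frame at two scales
    (a stand-in for the MDCM described in the abstract)."""

    def __init__(self, channels=64, kernel_size=3, offset_groups=8):
        super().__init__()
        offset_ch = 2 * offset_groups * kernel_size * kernel_size
        # Offsets are predicted from the concatenated reference/neighbor features.
        self.offset_full = nn.Conv2d(2 * channels, offset_ch, 3, padding=1)
        self.offset_half = nn.Conv2d(2 * channels, offset_ch, 3, padding=1)
        self.dcn_full = DeformConv2d(channels, channels, kernel_size,
                                     padding=kernel_size // 2)
        self.dcn_half = DeformConv2d(channels, channels, kernel_size,
                                     padding=kernel_size // 2)
        self.fuse = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, ref, nbr):
        # Full-resolution branch.
        off_f = self.offset_full(torch.cat([ref, nbr], dim=1))
        aligned_f = self.dcn_full(nbr, off_f)
        # Half-resolution branch: coarser offsets help with larger motion.
        ref_h, nbr_h = F.avg_pool2d(ref, 2), F.avg_pool2d(nbr, 2)
        off_h = self.offset_half(torch.cat([ref_h, nbr_h], dim=1))
        aligned_h = F.interpolate(self.dcn_half(nbr_h, off_h), scale_factor=2,
                                  mode="bilinear", align_corners=False)
        return self.fuse(torch.cat([aligned_f, aligned_h], dim=1))


class LocalGlobalAttention(nn.Module):
    """Combine a global (channel) gate with a local (spatial) gate on the
    aligned features (a stand-in for the MAM described in the abstract)."""

    def __init__(self, channels=64, reduction=4):
        super().__init__()
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(channels, 1, 7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x):
        x = x * self.channel_gate(x)    # global context
        return x * self.spatial_gate(x)  # local context


if __name__ == "__main__":
    ref = torch.randn(1, 64, 64, 64)  # reference-frame features
    nbr = torch.randn(1, 64, 64, 64)  # neighbor-frame features
    aligned = MultiScaleDeformAlign()(ref, nbr)
    fused = LocalGlobalAttention()(aligned)
    print(fused.shape)  # torch.Size([1, 64, 64, 64])
```

In a full pipeline of the kind the abstract outlines, such aligned and gated features would be propagated forward and backward along the sequence and combined with densely connected shallow features before upsampling.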
Keywords
Video super-resolution, Multi-scale deformable convolution, Multi-scale attention, Bidirectional propagation