Channel Attention for No-Reference Image Quality Assessment in DCT domain

Zesheng Wang, Liang Yuan, Guangtao Zhai

IEEE Signal Processing Letters (2024)

Abstract
The attention mechanism, especially self-attention, has achieved great success in image quality assessment. The advent of the Transformer has led to substantial improvements in no-reference image quality assessment (NR-IQA). Existing works focus on leveraging the global perceptual capability of Transformer encoders to perceive image quality. In this work, we take a different view and propose a novel multi-frequency channel attention framework for the Transformer encoder. Through frequency analysis, we prove mathematically that traditional global average pooling (GAP) is a special case of feature decomposition in the frequency domain. Building on this proof, we use the discrete cosine transform (DCT) to compress channels, efficiently exploiting frequency components that GAP overlooks. Experimental results show that the proposed method outperforms state-of-the-art methods.
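The abstract's central observation, that GAP is a specific instance of frequency-domain feature decomposition, can be checked numerically: the 2D DCT-II basis at frequency (0, 0) is constant, so the lowest-frequency DCT coefficient of an H × W feature map equals H·W times its spatial mean. A minimal NumPy sketch of this check (illustrative only, not the authors' implementation):

```python
import numpy as np

def dct2d_coeff(x, u, v):
    """Unnormalized (u, v) 2D DCT-II coefficient of a H x W feature map."""
    H, W = x.shape
    i = np.arange(H)[:, None]
    j = np.arange(W)[None, :]
    basis = (np.cos(np.pi * u * (2 * i + 1) / (2 * H)) *
             np.cos(np.pi * v * (2 * j + 1) / (2 * W)))
    return float((x * basis).sum())

# Deterministic example feature map.
x = np.arange(64, dtype=float).reshape(8, 8)

# At (u, v) = (0, 0) the cosine basis is all ones, so the coefficient
# is the plain sum of x; dividing by H*W recovers exactly GAP.
print(dct2d_coeff(x, 0, 0) / x.size)  # 31.5
print(x.mean())                       # 31.5
```

Coefficients at higher frequencies (u, v) ≠ (0, 0) weight the map with non-constant cosine bases; the proposed method compresses each channel with such components in addition to the (0, 0)/GAP term.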
Keywords
channel attention, discrete cosine transform, image quality assessment, Transformer