Learning to Rank Patches for Unbiased Image Redundancy Reduction
CVPR 2024 (2024)
Abstract
Images suffer from heavy spatial redundancy because pixels in neighboring
regions are spatially correlated. Existing approaches strive to overcome this
limitation by reducing less meaningful image regions. However, current leading
methods rely on supervisory signals. They may compel models to preserve content
that aligns with labeled categories and discard content belonging to unlabeled
categories. This categorical inductive bias makes these methods less effective
in real-world scenarios. To address this issue, we propose a self-supervised
framework for image redundancy reduction called Learning to Rank Patches
(LTRP). We observe that image reconstruction of masked image modeling models is
sensitive to the removal of visible patches when the masking ratio is high
(e.g., 90%). Building upon this observation, we implement LTRP in two steps:
inferring a semantic density score for each patch by quantifying the variation
between reconstructions with and without that patch, and learning to rank the
patches using these pseudo scores. The entire process is self-supervised, thus
avoiding the dilemma of categorical inductive bias. We conduct extensive
experiments on different datasets and tasks. The results demonstrate that LTRP
outperforms both supervised and other self-supervised methods due to its fair
assessment of image content.
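
The scoring step described above can be illustrated with a minimal sketch. The snippet below assumes a pretrained masked-image-modeling model exposed through a hypothetical `mae.reconstruct(image, idx)` call that returns the reconstruction given the indices of visible patches; the abstract specifies only the idea, namely scoring each visible patch by how much the reconstruction changes when that patch is removed.

```python
# A minimal sketch of LTRP's pseudo-score computation (step one).
# `mae` and its `reconstruct` method are assumed placeholders, not the
# paper's actual API.

import torch

@torch.no_grad()
def semantic_density_scores(mae, image, visible_idx):
    """Score each visible patch by reconstruction variation when it is removed.

    mae         -- pretrained masked autoencoder; `mae.reconstruct(image, idx)`
                   (hypothetical API) returns predicted patches given the
                   visible patch indices `idx`.
    image       -- input image tensor, shape (1, 3, H, W).
    visible_idx -- 1-D LongTensor of visible patch indices under a high
                   masking ratio (e.g., 10% of patches visible at 90% masking).
    """
    # Reference reconstruction with all visible patches kept.
    ref = mae.reconstruct(image, visible_idx)

    scores = []
    for i in range(visible_idx.numel()):
        # Drop the i-th visible patch and reconstruct again.
        keep = torch.cat([visible_idx[:i], visible_idx[i + 1:]])
        alt = mae.reconstruct(image, keep)
        # Larger reconstruction change -> higher semantic density.
        scores.append((alt - ref).abs().mean().item())
    return scores
```

As the abstract describes, the second step uses these pseudo scores as supervision for learning to rank patches, so that at inference time patch importance can be predicted directly rather than by re-running the reconstruction once per visible patch.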