Language-Driven Visual Consensus for Zero-Shot Semantic Segmentation
CoRR (2024)
Abstract
The pre-trained vision-language model, exemplified by CLIP, advances
zero-shot semantic segmentation by aligning visual features with class
embeddings through a transformer decoder to generate semantic masks. Despite
its effectiveness, prevailing methods within this paradigm encounter
challenges, including overfitting on seen classes and fragmented
masks. To mitigate these issues, we propose a Language-Driven Visual Consensus
(LDVC) approach, fostering improved alignment of semantic and visual
information. Specifically, we leverage class embeddings as anchors due to their
discrete and abstract nature, steering vision features toward class embeddings.
Moreover, to circumvent noisy alignments caused by the redundancy of visual
features, we introduce route attention into self-attention to find visual
consensus, thereby enhancing semantic consistency within the same
object. Equipped with a vision-language prompting strategy, our approach
significantly boosts the generalization capacity of segmentation models for
unseen classes. Experimental results underscore the effectiveness of our
approach, showcasing mIoU gains of 4.5 points on PASCAL VOC 2012 and 3.6 points
on COCO-Stuff 164k for unseen classes compared with state-of-the-art methods.
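To make the decoding scheme concrete, below is a minimal PyTorch sketch of the two ideas the abstract names: class embeddings acting as anchor queries that cross-attend to visual features, and a sparsified self-attention over visual tokens standing in for route attention. All module names, dimensions, the top-k similarity mask, and the final similarity-based mask logits are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a language-anchored decoder layer; names and the
# top-k routing mask are assumptions inferred from the abstract, not LDVC code.
import torch
import torch.nn as nn


class LanguageAnchoredDecoderLayer(nn.Module):
    """Sparse self-attention over visual tokens (a proxy for route attention),
    then cross-attention with class embeddings as anchor queries."""

    def __init__(self, dim: int = 512, heads: int = 8, top_k: int = 32):
        super().__init__()
        self.heads = heads
        self.top_k = top_k
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm_v = nn.LayerNorm(dim)
        self.norm_c = nn.LayerNorm(dim)

    def _route_mask(self, feats: torch.Tensor) -> torch.Tensor:
        # Crude stand-in for route attention: each visual token may only
        # attend to its top-k most similar tokens, encouraging consensus
        # within one object and pruning noisy long-range links.
        sim = feats @ feats.transpose(-2, -1)               # (B, N, N)
        k = min(self.top_k, sim.size(-1))
        kth = sim.topk(k, dim=-1).values[..., -1:]          # kth-largest sim
        mask = sim < kth                                    # True = blocked
        return mask.repeat_interleave(self.heads, dim=0)    # (B*H, N, N)

    def forward(self, class_emb: torch.Tensor, feats: torch.Tensor):
        # class_emb: (B, C, D) text/class embeddings; feats: (B, N, D) tokens.
        attn, _ = self.self_attn(feats, feats, feats,
                                 attn_mask=self._route_mask(feats))
        feats = self.norm_v(feats + attn)
        # Class embeddings act as anchors: they query the visual features,
        # steering per-class evidence toward the language space.
        cls, _ = self.cross_attn(class_emb, feats, feats)
        cls = self.norm_c(class_emb + cls)
        # Per-class mask logits from class-query / visual-token similarity.
        return torch.einsum("bcd,bnd->bcn", cls, feats)


if __name__ == "__main__":
    layer = LanguageAnchoredDecoderLayer()
    logits = layer(torch.randn(2, 21, 512), torch.randn(2, 196, 512))
    print(logits.shape)  # torch.Size([2, 21, 196]); reshape N to H x W masks
```

Keeping the class embeddings as queries (rather than refining visual features toward a visual prototype) is what makes the anchors "discrete and abstract": the text side stays fixed per class while the visual side is pulled toward it, which is the alignment direction the abstract argues reduces overfitting on seen classes.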