Cascade-CLIP: Cascaded Vision-Language Embeddings Alignment for Zero-Shot Semantic Segmentation
arXiv (2024)
Abstract
Pre-trained vision-language models, e.g., CLIP, have been successfully
applied to zero-shot semantic segmentation. Existing CLIP-based approaches
primarily utilize visual features from the last layer to align with text
embeddings, while neglecting the crucial information in intermediate layers,
which contain rich object details. However, we find that directly aggregating
the multi-level visual features weakens the zero-shot ability for novel
classes: the large differences between visual features from different layers
make them hard to align well with the text embeddings. We
resolve this problem by introducing a series of independent decoders to align
the multi-level visual features with the text embeddings in a cascaded way,
forming a novel but simple framework named Cascade-CLIP. Our Cascade-CLIP is
flexible and can be easily applied to existing zero-shot semantic segmentation
methods. Experimental results show that our simple Cascade-CLIP achieves
superior zero-shot performance on segmentation benchmarks such as COCO-Stuff,
Pascal-VOC, and Pascal-Context. Our code is available at:
https://github.com/HVision-NKU/Cascade-CLIP
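
To make the cascaded-alignment idea concrete, below is a minimal PyTorch sketch of the mechanism the abstract describes: each selected intermediate CLIP layer gets its own independent decoder that aligns that layer's visual tokens with the text embeddings, and the per-level class logits are then fused. The module names (LevelDecoder, CascadeAlign), the single-attention-block decoder, and the average fusion are illustrative assumptions, not the authors' actual implementation.

```python
# Minimal sketch of cascaded vision-language alignment (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F


class LevelDecoder(nn.Module):
    """Aligns one level of visual tokens with text embeddings (hypothetical design)."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.proj = nn.Linear(dim, dim)
        self.norm = nn.LayerNorm(dim)

    def forward(self, visual_tokens: torch.Tensor, text_emb: torch.Tensor) -> torch.Tensor:
        # visual_tokens: (B, N, D) patch tokens from one CLIP layer
        # text_emb:      (C, D)    one embedding per class name
        x, _ = self.attn(visual_tokens, visual_tokens, visual_tokens)
        x = self.norm(self.proj(x) + visual_tokens)
        x = F.normalize(x, dim=-1)
        t = F.normalize(text_emb, dim=-1)
        # Per-patch class logits via cosine similarity: (B, N, C)
        return x @ t.t()


class CascadeAlign(nn.Module):
    """One independent decoder per selected CLIP layer; per-level logits are fused.

    Averaging is an assumed fusion strategy for this sketch, not the paper's.
    """

    def __init__(self, dim: int, num_levels: int):
        super().__init__()
        self.decoders = nn.ModuleList(LevelDecoder(dim) for _ in range(num_levels))

    def forward(self, multi_level_tokens: list[torch.Tensor], text_emb: torch.Tensor) -> torch.Tensor:
        logits = [dec(v, text_emb) for dec, v in zip(self.decoders, multi_level_tokens)]
        return torch.stack(logits).mean(0)  # fused (B, N, C) segmentation logits


if __name__ == "__main__":
    B, N, D, C, L = 2, 196, 512, 20, 3
    feats = [torch.randn(B, N, D) for _ in range(L)]  # stand-in multi-level CLIP features
    text = torch.randn(C, D)                          # stand-in class-name text embeddings
    out = CascadeAlign(D, L)(feats, text)
    print(out.shape)  # torch.Size([2, 196, 20])
```

Running the example prints per-patch logits over 20 classes; a full pipeline would reshape and upsample these into a dense segmentation map. The key point the sketch mirrors is that no shared decoder is forced to reconcile dissimilar feature statistics across layers: each level is aligned with the text embeddings independently.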