Each Performs Its Functions: Task Decomposition and Feature Assignment for Audio-Visual Segmentation

IEEE Transactions on Multimedia (2024)

Abstract
Audio-visual segmentation (AVS) aims to segment the object instances that produce sound at the time of the video frames. Existing solutions focus on designing cross-modal interaction mechanisms that learn audio-visual correlations and segment objects simultaneously. Despite their effectiveness, these tightly coupled network structures become increasingly complex and hard to analyze. To address these problems, we propose a simple but effective method, Each Performs Its Functions (PIF), which focuses on task decomposition and feature assignment. Inspired by human sensory experiences, PIF decouples AVS into two subtasks, correlation learning and segmentation refinement, handled by two branches. Correlation learning learns the correspondence between sounds and visible individuals and provides a positional prior; segmentation refinement focuses on fine segmentation. We then assign features from different levels to the duties they suit best: deep features are used for cross-modal interaction because of their semantic advantages, while the rich textures of shallow features are used to improve segmentation results. Moreover, we propose a recurrent collaboration block to enhance inter-branch communication. Experimental results on AVSBench show that our method outperforms related state-of-the-art methods by a large margin (e.g., +6.0% mIoU and +7.6% F-score on the Multi-Source subset). In addition, by purposely boosting each subtask's performance, our approach can serve as a strong baseline for audio-visual segmentation. The code is available at: https://github.com/rookiie/PIF.
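The two-branch decomposition described in the abstract can be sketched compactly. The PyTorch snippet below is a minimal illustration only, assuming hypothetical module names, feature dimensions, and a simple dot-product audio-visual fusion; it is not the authors' implementation (see the linked repository for that) and omits the recurrent collaboration block.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PIFSketch(nn.Module):
    """Illustrative sketch of the two-branch decomposition: a
    correlation-learning branch uses deep (semantic) visual features
    and audio features to produce a coarse positional prior, and a
    segmentation-refinement branch uses shallow (texture-rich)
    features plus that prior to predict a fine mask. All names and
    shapes here are assumptions, not the paper's architecture."""

    def __init__(self, audio_dim=128, deep_dim=512, shallow_dim=64):
        super().__init__()
        # Correlation-learning branch: project audio into the deep
        # visual feature space and localize sounding regions.
        self.audio_proj = nn.Linear(audio_dim, deep_dim)
        self.coarse_head = nn.Conv2d(deep_dim, 1, kernel_size=1)
        # Segmentation-refinement branch: decode shallow features
        # together with the upsampled coarse prior into a fine mask.
        self.refine = nn.Sequential(
            nn.Conv2d(shallow_dim + 1, 64, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, 1),
        )

    def forward(self, audio, deep_feat, shallow_feat):
        # audio: (B, audio_dim); deep_feat: (B, deep_dim, h, w);
        # shallow_feat: (B, shallow_dim, H, W) with H > h.
        a = self.audio_proj(audio)[:, :, None, None]   # (B, deep_dim, 1, 1)
        corr = (deep_feat * a).sum(1, keepdim=True)    # audio-visual similarity map
        prior = torch.sigmoid(self.coarse_head(deep_feat * torch.sigmoid(corr)))
        prior_up = F.interpolate(prior, size=shallow_feat.shape[-2:],
                                 mode="bilinear", align_corners=False)
        mask = self.refine(torch.cat([shallow_feat, prior_up], dim=1))
        return prior, mask  # coarse positional prior, fine mask logits

if __name__ == "__main__":
    model = PIFSketch()
    prior, mask = model(torch.randn(2, 128),
                        torch.randn(2, 512, 7, 7),
                        torch.randn(2, 64, 56, 56))
    print(prior.shape, mask.shape)  # (2, 1, 7, 7) (2, 1, 56, 56)
```

The sketch reflects the abstract's feature assignment: cross-modal interaction happens only on deep features, while shallow features are touched only by the refinement decoder.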
Keywords
Audio-visual Segmentation, Cross-modal representation, Task decomposition