Towards Weakly Supervised Text-to-Audio Grounding
CoRR (2024)
Abstract
The text-to-audio grounding (TAG) task aims to predict the onsets and offsets of
sound events described by natural language. This task can facilitate
applications such as multimodal information retrieval. This paper focuses on
weakly-supervised text-to-audio grounding (WSTAG), where frame-level
annotations of sound events are unavailable, and only the caption of a whole
audio clip can be utilized for training. WSTAG is superior to
strongly-supervised approaches in its scalability to large audio-text datasets.
Two WSTAG frameworks are studied in this paper: sentence-level and
phrase-level. First, we analyze the limitations of mean pooling used in the
previous WSTAG approach and investigate the effects of different pooling
strategies. We then propose phrase-level WSTAG, which uses matching labels between
audio clips and phrases for training. Advanced negative sampling strategies and
self-supervision are proposed to enhance the accuracy of the weak labels and
provide pseudo strong labels. Experimental results show that our system
significantly outperforms the previous WSTAG SOTA. Finally, we conduct
extensive experiments to analyze the effects of several factors on phrase-level
WSTAG. The code and models are available at
https://github.com/wsntxxn/TextToAudioGrounding.
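
The abstract does not spell out the pooling strategies it compares for the sentence-level framework. Below is a minimal sketch, assuming frame-level audio-text similarity scores are aggregated into a single clip-level score, of the common choices (mean, max, and linear-softmax pooling) studied in weakly supervised audio work; the function name, signature, and tensor shapes are illustrative assumptions, not the paper's API.

```python
import torch


def pool_clip_score(frame_scores: torch.Tensor, strategy: str = "mean") -> torch.Tensor:
    """Aggregate frame-level scores of shape (batch, time) into clip-level scores (batch,).

    Hypothetical helper illustrating pooling choices for weakly supervised training.
    """
    if strategy == "mean":
        # Mean pooling: every frame contributes equally, which dilutes short
        # events -- the limitation analyzed for the previous WSTAG approach.
        return frame_scores.mean(dim=1)
    if strategy == "max":
        # Max pooling: only the single most confident frame drives the clip score.
        return frame_scores.max(dim=1).values
    if strategy == "linear_softmax":
        # Linear-softmax pooling: frames are weighted by their own scores,
        # i.e. sum(p^2) / sum(p), a compromise between mean and max.
        weights = frame_scores / frame_scores.sum(dim=1, keepdim=True).clamp(min=1e-7)
        return (frame_scores * weights).sum(dim=1)
    raise ValueError(f"unknown pooling strategy: {strategy}")
```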
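The phrase-level framework trains on matching labels between audio clips and phrases, with phrases drawn from other captions serving as negatives. The sketch below shows one simple way such weak binary targets could be assembled; the sampling policy and all names are assumptions and do not reproduce the paper's advanced negative sampling strategies.

```python
import random


def build_phrase_labels(audio_id, positive_phrases, phrase_pool, num_negatives=3):
    """Pair an audio clip with its own caption phrases (label 1) and with
    phrases sampled from other captions (label 0).

    Illustrative only: similarity-aware or other advanced negative sampling
    described in the paper is not reproduced here.
    """
    pairs = [(audio_id, phrase, 1) for phrase in positive_phrases]
    candidates = [p for p in phrase_pool if p not in positive_phrases]
    for phrase in random.sample(candidates, min(num_negatives, len(candidates))):
        pairs.append((audio_id, phrase, 0))
    return pairs
```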