Improving Audio Captioning Models with Fine-grained Audio Features, Text Embedding Supervision, and LLM Mix-up Augmentation
ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (2023)
Abstract
Automated audio captioning (AAC) aims to generate informative descriptions
for various sounds from nature and/or human activities. In recent years, AAC
has quickly attracted research interest, with state-of-the-art systems now
relying on a sequence-to-sequence (seq2seq) backbone powered by strong models
such as Transformers. Following the macro-trend of applied machine learning
research, in this work, we strive to improve the performance of seq2seq AAC
models by extensively leveraging pretrained models and large language models
(LLMs). Specifically, we utilize BEATs to extract fine-grained audio features.
Then, we employ Instructor LLM to fetch text embeddings of captions, and infuse
their language-modality knowledge into BEATs audio features via an auxiliary
InfoNCE loss function. Moreover, we propose a novel data augmentation method
that uses ChatGPT to produce caption mix-ups (i.e., grammatical and compact
combinations of two captions) which, together with the corresponding audio
mixtures, increase not only the amount but also the complexity and diversity of
training data. During inference, we propose to employ nucleus sampling and a
hybrid reranking algorithm, neither of which has previously been explored in AAC research.
Combining our efforts, our model achieves a new state-of-the-art 32.6 SPIDEr-FL
score on the Clotho evaluation split, and wins the 2023 DCASE AAC challenge.
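The auxiliary InfoNCE objective described above pulls each clip's audio embedding toward the Instructor embedding of its own caption while pushing it away from the other captions in the batch. A minimal NumPy sketch of a symmetric InfoNCE loss over paired embeddings (function name, pooling, and the temperature value are illustrative assumptions, not details from the paper):

```python
import numpy as np

def info_nce(audio_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired embeddings.

    audio_emb, text_emb: (batch, dim) arrays; matching rows are positive pairs,
    all other rows in the batch serve as negatives.
    """
    # L2-normalize so dot products are cosine similarities
    a = audio_emb / np.linalg.norm(audio_emb, axis=1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = a @ t.T / temperature      # (batch, batch) similarity matrix
    idx = np.arange(len(a))             # positives lie on the diagonal

    def xent(l):
        # numerically stable cross-entropy with diagonal targets
        l = l - l.max(axis=1, keepdims=True)
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[idx, idx].mean()

    # average the audio->text and text->audio directions
    return 0.5 * (xent(logits) + xent(logits.T))
```

Minimizing this loss aligns the audio encoder's output space with the caption embedding space, which is how language-modality knowledge can be infused into the BEATs features.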
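Nucleus (top-p) sampling, used here at inference time, draws each token only from the smallest set of tokens whose cumulative probability exceeds p, which yields more diverse caption candidates than greedy or beam decoding. A minimal sketch of one top-p draw from a next-token distribution (the paper's hybrid reranking criterion is not specified here, so only the sampling step is shown):

```python
import numpy as np

def nucleus_sample(probs, p=0.9, rng=None):
    """Sample one token id from the smallest prefix of tokens (sorted by
    descending probability) whose cumulative mass reaches p."""
    if rng is None:
        rng = np.random.default_rng()
    order = np.argsort(probs)[::-1]           # token ids by descending prob
    cum = np.cumsum(probs[order])
    cutoff = np.searchsorted(cum, p) + 1      # minimal nucleus covering p
    keep = order[:cutoff]
    renorm = probs[keep] / probs[keep].sum()  # renormalize inside the nucleus
    return rng.choice(keep, p=renorm)
```

In practice one would sample several full candidate captions this way and then rerank them; a plausible hybrid criterion would combine the model's own likelihood with an audio-text similarity score, though the exact reranking used in the paper is not detailed in this abstract.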
Keywords
AAC, BEATs, LLM, mix-up, InfoNCE