HEAR: Hearing Enhanced Audio Response for Video-grounded Dialogue
EMNLP 2023
Abstract
Video-grounded Dialogue (VGD) aims to answer questions about a given
multi-modal input comprising video, audio, and dialogue history. Although there
have been numerous efforts to develop VGD systems that improve the quality of
their responses, existing systems are capable of incorporating only the
information in the video and text, and tend to struggle to extract the
necessary information from the audio when generating appropriate responses to
the question. The VGD system seems to be deaf, and thus we coin this symptom
of current systems ignoring audio data a deaf response. To overcome the
deaf response problem, the Hearing Enhanced Audio Response (HEAR) framework is
proposed to perform sensible listening by selectively attending to audio
whenever the question requires it. The HEAR framework enhances the accuracy and
audibility of VGD systems in a model-agnostic manner. HEAR is validated on VGD
datasets (i.e., AVSD@DSTC7 and AVSD@DSTC8) and shows effectiveness with various
VGD systems.
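The abstract describes "sensible listening" only at a high level. As an illustration of the general idea of selectively attending to audio when the question requires it (not the paper's actual HEAR architecture; all names here are hypothetical), a minimal sketch might gate audio features by a question-audio relevance score:

```python
import math

def cosine(u, v):
    # Cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def gate_audio(question_emb, audio_emb, threshold=0.5):
    # Sketch of question-conditioned audio gating: keep audio features
    # only when the question appears audio-relevant, otherwise suppress
    # them. The threshold and scoring function are illustrative choices.
    relevance = cosine(question_emb, audio_emb)
    weight = relevance if relevance >= threshold else 0.0
    return [weight * a for a in audio_emb], relevance
```

Under this toy scheme, an audio-related question passes the audio features through (weighted by relevance), while an unrelated question zeroes them out, so the response model is not distracted by irrelevant audio.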