Task-Driven Exploration: Decoupling and Inter-Task Feedback for Joint Moment Retrieval and Highlight Detection
arXiv (2024)
Abstract
Video moment retrieval and highlight detection are two highly valuable tasks
in video understanding, but only recently have they been studied jointly.
Although existing studies have made impressive advances, they predominantly
follow a data-driven, bottom-up paradigm. Such a paradigm overlooks
task-specific and inter-task effects, resulting in suboptimal model
performance. In this paper, we propose a novel task-driven top-down framework
TaskWeave for joint moment retrieval and highlight detection. The framework
introduces a task-decoupled unit to capture task-specific and common
representations. To investigate the interplay between the two tasks, we propose
an inter-task feedback mechanism, which transforms the results of one task as
guiding masks to assist the other task. Different from existing methods, we
present a task-dependent joint loss function to optimize the model.
Comprehensive experiments and in-depth ablation studies on QVHighlights, TVSum,
and Charades-STA datasets corroborate the effectiveness and flexibility of the
proposed framework. Code is available at
https://github.com/EdenGabriel/TaskWeave.
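The inter-task feedback idea described above, where one task's output is transformed into a guiding mask for the other, could be sketched roughly as follows. This is a hedged illustration only: the function names, the softmax normalization, and the element-wise re-weighting are assumptions for exposition, not the paper's actual implementation.

```python
import math

def saliency_to_mask(saliency, temperature=1.0):
    """Hypothetical sketch: turn per-clip highlight saliency scores into a
    soft guiding mask (mean-normalized positive weights)."""
    m = max(saliency)  # subtract max for numerical stability
    exps = [math.exp((s - m) / temperature) for s in saliency]
    total = sum(exps)
    # Scale so the average weight is 1.0, keeping feature magnitudes stable.
    return [e / total * len(saliency) for e in exps]

def apply_feedback_mask(clip_features, mask):
    """Re-weight each clip's feature vector by its mask value, so the
    moment-retrieval branch attends more to highly salient clips."""
    return [[w * x for x in feat] for w, feat in zip(mask, clip_features)]
```

In this sketch, clips that highlight detection scores highly receive weights above 1 and are emphasized for moment retrieval; the reverse direction (moment predictions masking the highlight branch) would follow the same pattern.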