DIBS: Enhancing Dense Video Captioning with Unlabeled Videos via Pseudo Boundary Enrichment and Online Refinement
arXiv (2024)
Abstract
We present Dive Into the BoundarieS (DIBS), a novel pretraining framework for
dense video captioning (DVC) that improves the quality of the generated event
captions and their associated pseudo event boundaries derived from
unlabeled videos. By leveraging the capabilities of diverse large language
models (LLMs), we generate rich DVC-oriented caption candidates and optimize
the corresponding pseudo boundaries under several meticulously designed
objectives, considering diversity, event-centricity, temporal ordering, and
coherence. Moreover, we further introduce a novel online boundary refinement
strategy that iteratively improves the quality of pseudo boundaries during
training. Comprehensive experiments have been conducted to examine the
effectiveness of the proposed technique components. By leveraging a substantial
amount of unlabeled video data, such as HowTo100M, we achieve a remarkable
advancement on standard DVC datasets like YouCook2 and ActivityNet. We
outperform the previous state-of-the-art Vid2Seq across a majority of metrics,
achieving this with just 0.4% of the unlabeled video data used by Vid2Seq.
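To make the online boundary refinement idea concrete, here is a minimal, purely illustrative sketch of how pseudo event boundaries could be iteratively updated during training: when the model's own boundary prediction is confident enough, the pseudo boundary is blended toward it, otherwise it is kept unchanged. All names (`refine_boundary`, the `momentum` and `threshold` parameters) are assumptions for illustration, not the paper's actual algorithm.

```python
# Illustrative sketch of online pseudo-boundary refinement (hypothetical,
# not the actual DIBS procedure). Boundaries are (start, end) pairs in seconds.

def refine_boundary(pseudo, predicted, confidence, threshold=0.7, momentum=0.5):
    """Blend a pseudo boundary toward the model's predicted boundary when
    the prediction confidence exceeds a threshold; otherwise keep the
    current pseudo boundary unchanged."""
    if confidence < threshold:
        return pseudo  # model not confident: keep existing pseudo boundary
    start = momentum * pseudo[0] + (1 - momentum) * predicted[0]
    end = momentum * pseudo[1] + (1 - momentum) * predicted[1]
    return (start, end)

# Confident prediction pulls the pseudo boundary toward it:
print(refine_boundary((0.0, 10.0), (2.0, 8.0), confidence=0.9))  # (1.0, 9.0)
# Low-confidence prediction leaves the pseudo boundary untouched:
print(refine_boundary((0.0, 10.0), (2.0, 8.0), confidence=0.3))  # (0.0, 10.0)
```

The momentum-style update is one common way to stabilize iterative label refinement; the paper's actual refinement strategy and objectives (diversity, event-centricity, temporal ordering, coherence) are described in the full text.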