A Strong Baseline for Temporal Video-Text Alignment
CoRR (2023)
Abstract
In this paper, we consider the problem of temporally aligning video and text from instructional videos: given a long-term video and associated text sentences, our goal is to determine their corresponding timestamps in the video. To this end, we establish a simple yet strong model that adopts a Transformer-based architecture with all texts as queries, iteratively attending to the visual features to infer the optimal timestamps. We conduct thorough experiments to investigate: (i) the effect of upgrading ASR systems to reduce errors from speech recognition, (ii) the effect of various visual-textual backbones, ranging from CLIP and S3D to the more recent InternVideo, and (iii) the effect of transforming noisy ASR transcripts into descriptive steps by prompting a large language model (LLM) to summarize the core activities in the transcript, yielding a new training dataset. As a result, our proposed simple model demonstrates superior performance on both narration alignment and procedural step grounding tasks, surpassing existing state-of-the-art methods by a significant margin on three public benchmarks, namely, 9.3% on HT-Step, 3.4% on HTM-Align, and 4.7% on CrossTask. We believe the proposed model and the dataset with descriptive steps can serve as a strong baseline for future research in temporal video-text alignment. All code, models, and the resulting dataset will be publicly released to the research community.