Object-Oriented Video Captioning with Trajectory Graph and Attribute Exploring
arXiv (Cornell University), 2020
Abstract
Traditional video captioning requires a holistic description of the video, so
detailed descriptions of specific objects may not be available. Without
associating transition trajectories, image-based data-driven methods cannot
understand activities from visual features alone. Moreover, training on
frame-level inter-object features and ambiguous descriptions makes it
difficult to learn vision-language relationships. We propose a novel task,
named object-oriented video captioning, which focuses on understanding videos
at the object level. We further propose the object-oriented video captioning
network (OVC-Net), built on a trajectory graph and attribute exploring, to
effectively analyze activities over time and stably capture vision-language
connections from small samples. The trajectory graph provides a useful
supplement over previous image-based approaches, allowing the model to reason
about activities from the temporal evolution of visual features and the
dynamic movement of spatial locations. The attribute explorer captures
discriminative features among different objects, with which the subsequent
caption generator yields more informative and accurate descriptions. We then
construct a new dataset with explicit object-sentence pairs to facilitate
effective cross-modal learning. To demonstrate the effectiveness, we conduct
experiments on the new dataset and compare against state-of-the-art video
captioning methods. The experimental results show that OVC-Net precisely
describes concurrent objects and achieves state-of-the-art performance.
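The trajectory graph links each tracked object's detections across consecutive frames, so that activities can be inferred from how spatial locations change over time. The sketch below is only a minimal illustration of that idea, not the authors' implementation; the `Node` structure, `trajectory_edges` function, and displacement-based edge attributes are assumptions for exposition.

```python
from dataclasses import dataclass

@dataclass
class Node:
    """One detection of a tracked object in a single frame (hypothetical structure)."""
    frame: int
    bbox: tuple  # (x, y, w, h) in image coordinates

def trajectory_edges(nodes):
    """Connect an object's detections in temporal order, recording the
    spatial displacement between consecutive frames as the edge attribute.
    This is one simple way to expose the 'dynamic movement of spatial
    locations' that the trajectory graph reasons over."""
    nodes = sorted(nodes, key=lambda n: n.frame)
    edges = []
    for a, b in zip(nodes, nodes[1:]):
        dx = b.bbox[0] - a.bbox[0]
        dy = b.bbox[1] - a.bbox[1]
        edges.append((a.frame, b.frame, (dx, dy)))
    return edges
```

In the paper's full model, per-node visual features would accompany the boxes, so each edge carries both appearance evolution and motion; here only the motion component is sketched.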
Keywords
video captioning, trajectory graph, object-oriented