Action Co-Localization In An Untrimmed Video By Graph Neural Networks

Multimedia Modeling (MMM 2020), Part I (2020)

Abstract
We present an efficient approach for action co-localization in an untrimmed video that exploits contextual and temporal features from multiple action proposals. Most existing action localization methods focus on individual action instances without accounting for the correlations among them. To exploit such correlations, we propose the Graph-based Temporal Action Co-Localization (G-TACL) method, which aggregates contextual features from multiple action proposals to assist temporal localization. This aggregation is achieved with Graph Neural Networks whose nodes are initialized with the action proposal representations. In addition, a multi-level consistency evaluator is proposed to measure the similarity between any two proposals, summarizing their low-level temporal coincidences, feature vector dot products, and high-level contextual feature similarities. The nodes are then iteratively updated with a Gated Recurrent Unit (GRU), and the resulting node features are used to regress the temporal boundaries of the action proposals and, finally, to localize the action instances. Experiments on the THUMOS'14 and MEXaction2 datasets demonstrate the efficacy of the proposed method.
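To make the described pipeline concrete, the following PyTorch sketch mirrors the loop the abstract outlines: nodes initialized from proposal features, a pairwise multi-level consistency matrix, GRU-based node updates, and boundary regression. All names, layer sizes, the equal weighting of the three consistency terms, and the cosine surrogate for the high-level contextual similarity are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GTACLSketch(nn.Module):
    """Minimal sketch of the G-TACL update loop described in the abstract.
    Layer sizes, equal term weighting, and the cosine stand-in for the
    contextual similarity are assumptions made for illustration."""

    def __init__(self, feat_dim=512, steps=3):
        super().__init__()
        self.steps = steps
        self.gru = nn.GRUCell(feat_dim, feat_dim)  # iterative node update
        self.regressor = nn.Linear(feat_dim, 2)    # (start, end) boundary offsets

    def consistency(self, h, tiou):
        # Multi-level consistency between every pair of proposals:
        # low-level temporal IoU, feature vector dot products, and a
        # cosine similarity standing in for the high-level contextual term.
        dot = h @ h.t()
        cos = F.cosine_similarity(h.unsqueeze(1), h.unsqueeze(0), dim=-1)
        return torch.softmax(tiou + dot + cos, dim=-1)  # assumed equal weights

    def forward(self, feats, tiou):
        # feats: (N, D) proposal representations; tiou: (N, N) temporal IoU.
        h = feats
        for _ in range(self.steps):
            adj = self.consistency(h, tiou)
            msg = adj @ h         # aggregate contextual features from neighbours
            h = self.gru(msg, h)  # GRU update of the node states
        return self.regressor(h)  # regressed temporal boundaries per proposal

# Usage with random stand-in data:
proposals = torch.randn(8, 512)            # 8 action proposal features
tiou = torch.rand(8, 8)                    # pairwise temporal IoU matrix
offsets = GTACLSketch()(proposals, tiou)   # (8, 2) boundary refinements
```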
Keywords
Temporal action co-localization, Multi-level consistency evaluator