Spatial-Temporal Separable Attention for Video Action Recognition

2022 International Conference on Frontiers of Artificial Intelligence and Machine Learning (FAIML)

Abstract
Convolutional neural networks (CNNs) have proved to be an efficient approach for a variety of visual recognition tasks. However, it is more difficult for CNNs to capture long-range spatial-temporal cues in dynamic videos than in static images. Recent non-local neural networks attempt to overcome this problem with a self-attention mechanism that computes pairwise affinities between all spatial-temporal positions, but this introduces a substantial computational burden. In this paper, we propose a spatial-temporal separable attention module (STSAM) to reduce the computational complexity. Experimental results on the Kinetics 400 benchmark show that our model achieves better performance while introducing fewer extra FLOPs than non-local neural networks.
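The abstract does not spell out how STSAM factorizes the attention, so the following is only a minimal numpy sketch of the general idea it describes: full non-local attention computes affinities over all T·H·W positions at once (quadratic in T·H·W), whereas a separable scheme can attend along the temporal axis and the spatial axis in two cheaper stages. The function names and the exact temporal-then-spatial ordering here are illustrative assumptions, not the paper's actual module.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def nonlocal_attention(x):
    """Full self-attention over all positions.

    x: (T*H*W, C). Affinity matrix is (THW, THW), so cost grows
    quadratically with the number of spatial-temporal positions.
    """
    a = softmax(x @ x.T)          # (THW, THW) pairwise affinities
    return a @ x                  # (THW, C)

def separable_attention(x, T, H, W):
    """Hypothetical separable variant: temporal attention per spatial
    site, then spatial attention per frame. Affinity matrices are only
    (T, T) and (HW, HW), avoiding the full (THW, THW) matrix.
    """
    C = x.shape[-1]
    v = x.reshape(T, H * W, C)
    # Stage 1: for each spatial position, attend across the T frames.
    vt = np.transpose(v, (1, 0, 2))                  # (HW, T, C)
    a_t = softmax(vt @ np.transpose(vt, (0, 2, 1)))  # (HW, T, T)
    v = np.transpose(a_t @ vt, (1, 0, 2))            # (T, HW, C)
    # Stage 2: for each frame, attend across the H*W positions.
    a_s = softmax(v @ np.transpose(v, (0, 2, 1)))    # (T, HW, HW)
    v = a_s @ v
    return v.reshape(T * H * W, C)
```

For a clip with T frames and H·W spatial sites, the full affinity computation scales as (T·H·W)² per channel, while the two separable stages scale as T²·H·W + T·(H·W)², which is where the FLOPs saving claimed in the abstract would come from.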
Keywords
video action recognition, attention mechanism, nonlocal neural networks, convolutional neural network