Transfer Learning For Videos: From Action Recognition To Sign Language Recognition

2020 IEEE International Conference on Image Processing (ICIP)

Abstract
In this paper, we propose using Inflated 3D (I3D) Convolutional Neural Networks for large-scale signer-independent sign language recognition (SLR). Unlike other recent methods, our method relies only on RGB video data and does not require additional modalities such as depth, which is beneficial for the many applications in which depth data is unavailable. We show that transferring spatiotemporal features from a large-scale action recognition dataset greatly benefits training for SLR. Based on an architecture for action recognition [1], we use two-stream I3D ConvNets operating on RGB and optical flow images. Our method is evaluated on the ChaLearn249 Isolated Gesture Recognition dataset and clearly outperforms other state-of-the-art RGB-based methods.
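As a hedged illustration (not the authors' code), the final step of a two-stream architecture like the one described above is typically a late fusion of the per-class scores produced by the RGB stream and the optical-flow stream. The sketch below assumes a simple averaging rule over softmax probabilities, a common choice for two-stream networks; the logit values and the three-class setup are made up for the example.

```python
import numpy as np

def fuse_two_stream(rgb_logits, flow_logits):
    """Late-fuse per-class scores from the RGB and flow streams.

    Each input is a 1-D array of shape (num_classes,) holding one
    stream's logits for a single video clip. Each stream is converted
    to probabilities independently, then the probabilities are
    averaged (an assumed fusion rule, common for two-stream nets).
    """
    def softmax(x):
        e = np.exp(x - np.max(x))  # subtract max for numerical stability
        return e / e.sum()

    return 0.5 * (softmax(rgb_logits) + softmax(flow_logits))

# Toy example with 3 gesture classes: the two streams disagree on
# the top class, and fusion decides between them.
rgb = np.array([2.0, 1.0, 0.1])   # RGB stream favors class 0
flow = np.array([0.5, 2.5, 0.1])  # flow stream favors class 1
fused = fuse_two_stream(rgb, flow)
pred = int(np.argmax(fused))
```

Here the flow stream is more confident in its top class than the RGB stream is in its own, so the fused prediction follows the flow stream; averaging probabilities lets the more confident stream dominate without either stream vetoing the other.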
Keywords
Sign language recognition, video transfer learning, 3D CNNs