Attention Modeling with Temporal Shift in Sign Language Recognition

Ahmet Faruk Çelimli, Oğulcan Özdemir, Lale Akarun

2022 30th Signal Processing and Communications Applications Conference (SIU), 2022

Abstract
Sign languages are visual languages expressed through multiple cues, including facial expressions, upper-body movements, and hand gestures. These visual cues may be used together or at different instants to convey the message. To recognize sign languages, it is therefore crucial to model what, where, and when to attend. In this study, we developed a model that uses different visual cues simultaneously by combining Temporal Shift Modules (TSMs) with attention modeling. Our experiments were conducted on the BosphorusSign22k dataset. Our system achieved 92.46% recognition accuracy, improving performance by approximately 14 points over the baseline study's 78.85% accuracy.
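A Temporal Shift Module, as described in the TSM literature, exchanges a fraction of feature channels between neighboring frames so that an otherwise per-frame (2D) model can mix temporal information at essentially zero extra cost. The sketch below illustrates only that generic channel-shift idea in plain Python; the function name and the 1/4 shift fraction are illustrative defaults, not details taken from this paper.

```python
def temporal_shift(frames, shift_div=4):
    """Shift channels across time, TSM-style.

    frames: list of T per-frame feature vectors, each a list of C floats.
    The first C // shift_div channels take their value from the previous
    frame, the next C // shift_div from the next frame, and the remaining
    channels are left unchanged. Boundaries are zero-padded.
    """
    T, C = len(frames), len(frames[0])
    fold = C // shift_div
    out = [[0.0] * C for _ in range(T)]
    for t in range(T):
        for c in range(C):
            if c < fold:                      # shifted forward in time
                out[t][c] = frames[t - 1][c] if t > 0 else 0.0
            elif c < 2 * fold:                # shifted backward in time
                out[t][c] = frames[t + 1][c] if t < T - 1 else 0.0
            else:                             # untouched channels
                out[t][c] = frames[t][c]
    return out


# Example: 3 frames, 4 channels each; channel 0 is shifted forward,
# channel 1 backward, channels 2-3 are left in place.
clip = [[0.0, 1.0, 2.0, 3.0],
        [4.0, 5.0, 6.0, 7.0],
        [8.0, 9.0, 10.0, 11.0]]
shifted = temporal_shift(clip)
```

In practice this shift is inserted inside residual blocks of a CNN backbone so the convolution that follows sees features from adjacent frames, which is what lets the recognition model reason about motion.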
Keywords
deep learning,temporal shift,attention modeling,sign language recognition