CorrNet+: Sign Language Recognition and Translation via Spatial-Temporal Correlation
CoRR (2024)
Abstract
In sign language, the conveyance of human body trajectories predominantly
relies upon the coordinated movements of hands and facial expressions across
successive frames. Despite the recent advancements of sign language
understanding methods, they often solely focus on individual frames, inevitably
overlooking the inter-frame correlations that are essential for effectively
modeling human body trajectories. To address this limitation, this paper
introduces a spatial-temporal correlation network, denoted as CorrNet+, which
explicitly identifies body trajectories across multiple frames. Specifically,
CorrNet+ employs a correlation module and an identification module to build
human body trajectories. A temporal attention module then adaptively
evaluates the contributions of different frames. The resultant
features offer a holistic perspective on human body movements, facilitating a
deeper understanding of sign language. As a unified model, CorrNet+ achieves
new state-of-the-art performance on two extensive sign language understanding
tasks, including continuous sign language recognition (CSLR) and sign language
translation (SLT). Notably, CorrNet+ surpasses previous methods that rely on
resource-intensive pose-estimation networks or pre-extracted heatmaps for
hand and facial feature extraction. Compared with CorrNet, CorrNet+ achieves a
significant performance boost across all benchmarks while halving the
computational overhead. A comprehensive comparison with previous
spatial-temporal reasoning methods verifies the superiority of CorrNet+. Code
is available at https://github.com/hulianyuyy/CorrNet_Plus.
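The temporal attention step described above can be illustrated with a minimal sketch: score each frame, normalize the scores with a softmax over the temporal axis, and reweight per-frame features so more informative frames contribute more. This is a simplified, hypothetical illustration, not the actual CorrNet+ implementation (see the linked repository for that); the mean-based saliency score is an assumption chosen for brevity.

```python
import numpy as np

def temporal_attention(features: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Illustrative temporal attention over a (T, C) feature sequence.

    T = number of frames, C = channels. Each frame receives a scalar
    score (here, a crude channel mean as a stand-in for a learned
    scorer), scores are softmax-normalized over time, and frames are
    reweighted by the resulting attention weights.
    """
    scores = features.mean(axis=1)                 # (T,) per-frame saliency
    scores = scores - scores.max()                 # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()  # softmax over frames
    return features * weights[:, None], weights

# Example: 4 frames, 8 channels
feats = np.arange(32, dtype=np.float64).reshape(4, 8)
weighted, w = temporal_attention(feats)
```

In CorrNet+ the frame scores come from a learned module rather than a channel mean, but the reweighting pattern is the same: frames with higher attention weights dominate the aggregated representation.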