Transformer-based Fusion of 2D-pose and Spatio-temporal Embeddings for Distracted Driver Action Recognition
CVPR Workshops (2024)
Abstract
Classification and localization of driving actions over time is important for
advanced driver-assistance systems and naturalistic driving studies. Temporal
localization is challenging because it requires robustness, reliability, and
accuracy. In this study, we aim to improve temporal localization and
classification accuracy by combining a video action recognition network and a
2D human-pose estimation network into one model. To this end, we design a
transformer-based fusion architecture to effectively combine 2D-pose features
and spatio-temporal features. The model uses 2D-pose features as the positional
embedding of the transformer architecture and spatio-temporal features as the
main input to the transformer encoder. The proposed solution is generic and
independent of the number and placement of cameras, producing frame-based class
probabilities as output. Finally, the post-processing step combines information
from different camera views to obtain final predictions and eliminate false
positives. The model performs well on the A2 test set of the 2023 NVIDIA AI
City Challenge for naturalistic driving action recognition, achieving an
overlap score of 0.5079 on the organizer-defined distracted driver behaviour
metric.
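The fusion idea described above can be sketched in a few lines: per-frame 2D-pose features act as the positional embedding added to the spatio-temporal tokens, which then pass through a self-attention encoder and a frame-wise classifier. This is a minimal illustrative sketch, not the authors' implementation: the single-head attention with identity projections, the feature dimensions, and the random inputs are all simplifying assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x):
    # Single-head scaled dot-product self-attention over frames,
    # with identity Q/K/V projections for brevity (an assumption).
    scores = x @ x.T / np.sqrt(x.shape[-1])
    return softmax(scores, axis=-1) @ x

def fuse_and_classify(spatio_temporal, pose, w_cls):
    # Pose features play the role of the positional embedding:
    # they are added to the spatio-temporal tokens before encoding.
    tokens = spatio_temporal + pose
    encoded = self_attention(tokens)
    # Frame-wise class probabilities, one distribution per frame.
    return softmax(encoded @ w_cls, axis=-1)

rng = np.random.default_rng(0)
T, D, C = 8, 16, 4                  # frames, feature dim, action classes
st = rng.normal(size=(T, D))        # spatio-temporal features (e.g. video backbone output)
pose = rng.normal(size=(T, D))      # 2D-pose features projected to the same dim (assumed)
w_cls = rng.normal(size=(D, C))     # hypothetical classifier weights
probs = fuse_and_classify(st, pose, w_cls)
print(probs.shape)                  # (8, 4)
```

In the paper's full pipeline these per-frame probabilities from multiple camera views would then be merged in the post-processing step to produce the final action segments.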
Keywords
2023 NVIDIA AI City Challenge, 2D human-pose estimation networks, A2 test set, advanced driver-assistance systems, camera numbers, classification accuracy, distracted driver action recognition, false positives, frame-based class probabilities, naturalistic driving action recognition, naturalistic driving studies, positional embedding, reliability, spatio-temporal embeddings, spatio-temporal features, temporal localization, transformer architecture, transformer-based fusion architecture, video action recognition