Advancing Dynamic Hand Gesture Recognition in Driving Scenarios with Synthetic Data.

AutomotiveUI (Adjunct Proceedings), 2023

Abstract
Creating a diverse and comprehensive dataset of hand gestures for dynamic human-machine interfaces in the automotive domain can be challenging and time-consuming. To overcome this challenge, we propose using synthetic gesture datasets generated from virtual 3D models. Our framework uses Unreal Engine to synthesize realistic hand gestures, offering customization options and reducing the risk of overfitting. Multiple variants, covering gesture speed, performance style, and hand shape, are generated to improve generalizability. Additionally, we simulate different camera locations and camera types, such as RGB, infrared, and depth cameras, without the additional time and cost of acquiring physical hardware. Experimental results demonstrate that our proposed framework, SynthoGestures, improves gesture recognition accuracy and can replace or augment real-hand datasets. By saving time and effort in dataset creation, our tool accelerates the development of gesture recognition systems for automotive applications.
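As a rough illustration of the parameter randomization described in the abstract (this is not the paper's actual SynthoGestures API; all names, value ranges, and the sampling scheme below are assumptions for illustration only), a synthetic-data pipeline of this kind typically samples per-clip variations in gesture speed, hand shape, camera type, and camera placement before rendering:

```python
import random
from dataclasses import dataclass

# Hypothetical option sets -- illustrative only, not taken from the paper.
CAMERA_TYPES = ["rgb", "infrared", "depth"]
HAND_SHAPES = ["small", "medium", "large"]


@dataclass
class GestureVariant:
    gesture: str                  # e.g. "swipe_left"
    speed_scale: float            # playback-speed multiplier for the animation
    hand_shape: str               # which virtual hand model to render
    camera_type: str              # simulated sensor modality
    camera_offset_cm: tuple       # random perturbation of camera placement (x, y, z)


def sample_variants(gesture: str, n: int, seed: int = 0) -> list[GestureVariant]:
    """Sample n randomized variants of one gesture for synthetic rendering."""
    rng = random.Random(seed)
    variants = []
    for _ in range(n):
        variants.append(GestureVariant(
            gesture=gesture,
            speed_scale=rng.uniform(0.7, 1.3),
            hand_shape=rng.choice(HAND_SHAPES),
            camera_type=rng.choice(CAMERA_TYPES),
            camera_offset_cm=(
                rng.uniform(-5.0, 5.0),
                rng.uniform(-5.0, 5.0),
                rng.uniform(-5.0, 5.0),
            ),
        ))
    return variants


if __name__ == "__main__":
    # Each sampled variant would then be passed to the rendering engine
    # (e.g. an Unreal Engine scene) to produce one synthetic clip.
    for v in sample_variants("swipe_left", n=3):
        print(v)
```

Randomizing these factors per clip is what lets a single gesture definition yield many visually distinct training samples across sensor modalities without recording new real-world data.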