Mid-Air Fingertip-Based User Interaction in Mixed Reality

2018 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct), 2018

Abstract
With data growing at an enormous rate, there is a need for advanced data visualization techniques. Visualizing these data sets in Mixed Reality (MR) gives the user an immersive experience in the context of real-world applications. Most existing work requires inordinately priced devices such as the Microsoft HoloLens or Meta Glass, which rely on proprietary hardware for data visualization and hand-gesture interaction. In this paper, we demonstrate a cost-effective solution for data visualization in MR mode using frugal devices such as Google Cardboard and VR Box. However, these devices support only primitive modes of interaction, such as a magnetic trigger or conductive lever, and have limited user-input capability. To interact with visualizations and provide a rich user experience, we propose an intuitive pointing-fingertip gestural interface operating in the user's Field of View (FoV). The proposed pointing-gesture recognition framework is driven by a cascade of deep learning models: a state-of-the-art Faster R-CNN that localizes the hand, followed by a proposed regression CNN that localizes the fingertip. We conducted both objective and subjective evaluations of the proposed method. The objective metrics are fingertip-recognition accuracy and computational time; the subjective evaluation covers user comfort and the effectiveness of the proposed fingertip interaction.
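The two-stage cascade described in the abstract, a hand detector followed by a fingertip regressor whose output must be mapped back to full-frame coordinates, can be sketched as below. This is a minimal illustration, not the paper's implementation: the `detect_hand` and `regress_fingertip` functions are placeholder stubs standing in for the Faster R-CNN hand detector and the regression CNN, and their return values are hypothetical. The point of the sketch is the crop-to-frame coordinate mapping that chains the two stages.

```python
import numpy as np

def detect_hand(frame):
    # Stub standing in for the Faster R-CNN stage: returns one hand
    # bounding box (x, y, w, h) in image coordinates. A real detector
    # would run inference on the frame here. Values are hypothetical.
    return (40, 60, 100, 120)

def regress_fingertip(crop):
    # Stub standing in for the regression-CNN stage: returns the
    # fingertip position normalized to [0, 1] within the hand crop.
    # The value is hypothetical (a fingertip near the top of the crop).
    return (0.5, 0.1)

def locate_fingertip(frame):
    """Cascade: detect the hand, crop it, regress the fingertip inside
    the crop, then map the result back to full-frame coordinates."""
    x, y, w, h = detect_hand(frame)
    crop = frame[y:y + h, x:x + w]
    u, v = regress_fingertip(crop)
    # Normalized crop coordinates -> image coordinates.
    return (x + u * w, y + v * h)

frame = np.zeros((480, 640, 3), dtype=np.uint8)  # dummy camera frame
print(locate_fingertip(frame))  # (90.0, 72.0)
```

Cropping before regression is the usual motivation for such a cascade: the second network only ever sees a tight hand region, so it can be small and fast, which matters for the computational-time metric the paper evaluates.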
Keywords
Data visualization, Google, Gesture recognition, Cameras, Computer architecture, Virtual reality, User experience