Human-Centered, Ergonomic Wearable Device with Computer Vision Augmented Intelligence for VR Multimodal Human-Smart Home Object Interaction.

HRI (2019)

Citations: 14 | Views: 38
Abstract
In the future, Human-Robot Interaction should be enabled by a compact, human-centered, and ergonomic wearable device that can seamlessly merge human and machine by constantly identifying each other's intentions. In this paper, we showcase an ergonomic, lightweight wearable device that identifies a person's eye and facial gestures through physiological signal measurements. Since human intentions are usually coupled with eye movements and facial expressions, properly designed interactions based on these gestures let people interact naturally with robots or smart home objects. Combined with Computer Vision object recognition algorithms, this allows people to use very simple and straightforward communication strategies to operate a telepresence robot and control smart home objects remotely, completely hands-free. A person can wear a VR head-mounted display, see through the robot's eyes (a remote camera attached to the robot), and interact with smart home devices intuitively through simple facial gestures or eye blinks. As an assistive tool, this is tremendously beneficial for people with motor impairments. People without disabilities can also free their hands for other tasks while operating smart home devices, as a multimodal control strategy.
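The interaction loop the abstract describes (a wearable emits an eye/facial gesture event, the vision module reports the object currently in the robot camera's view, and the pair maps to a smart-home command) can be sketched as a simple dispatch table. This is a minimal illustrative sketch, not the authors' implementation; all names, gestures, and commands are hypothetical assumptions.

```python
# Hypothetical sketch of the hands-free control loop: the wearable
# produces gesture events, computer vision names the object in view,
# and a lookup table turns the (gesture, object) pair into a command.
from dataclasses import dataclass


@dataclass(frozen=True)
class GestureEvent:
    kind: str  # e.g. "blink", "double_blink", "brow_raise" (assumed labels)


# (gesture, object-in-view) -> smart-home command (all entries illustrative)
COMMAND_TABLE = {
    ("double_blink", "lamp"): "lamp/toggle",
    ("double_blink", "tv"): "tv/power",
    ("brow_raise", "tv"): "tv/volume_up",
}


def dispatch(event: GestureEvent, object_in_view: str):
    """Return the command for this gesture/object pair, or None if unmapped."""
    return COMMAND_TABLE.get((event.kind, object_in_view))


# Example: the user looks at the lamp through the robot's camera and
# double-blinks; the dispatcher resolves this to a lamp toggle command.
print(dispatch(GestureEvent("double_blink"), "lamp"))  # -> lamp/toggle
```

In a real system the table lookup would be replaced by the paper's gesture classifier and object recognizer, but the hands-free principle is the same: the gaze target supplies the object, and the facial gesture supplies the verb.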
Keywords
Wearable Device, Smart Home, Telepresence Robot, Virtual Reality, Human-Computer Interaction, Human-Centered Design