Using Multi-modal Machine Learning for User Behavior Prediction in Simulated Smart Home for Extended Reality

Virtual, Augmented and Mixed Reality: Design and Development: 14th International Conference, VAMR 2022, Held as Part of the 24th HCI International Conference, HCII 2022, Virtual Event, June 26 – July 1, 2022, Proceedings, Part I (2022)

Abstract
We propose a multi-modal approach to controlling smart home devices in a virtual-reality simulation of a smart home. The approach infers the user’s intent, namely the target smart home device and the desired action for that device to perform. It does so by examining two main modalities: the spoken utterance and spatial information (such as gestures, positions, and hand interactions), supplemented by contextual information such as the device’s current state. Because the information in the utterance and the spatial information can be disjoint or complementary, we process the two sources in parallel using multiple machine learning models, and ensemble the models’ outputs to produce the final intent prediction. Beyond the proposed approach, we also present our prototype and our initial findings.
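The abstract does not specify the models or the ensembling scheme, but the parallel-models-plus-ensemble idea can be sketched. Below is a minimal Python sketch under stated assumptions: the intent label set, the two toy stand-in models, and the weighted-average ensemble are all hypothetical illustrations, not the paper's implementation.

import numpy as np

# Hypothetical intent labels: (device, action) pairs.
# The paper's actual label set is not given in the abstract.
INTENTS = [("lamp", "turn_on"), ("lamp", "turn_off"),
           ("tv", "turn_on"), ("tv", "volume_up")]

def utterance_model(utterance: str) -> np.ndarray:
    """Stand-in for a trained language model: returns a probability
    distribution over INTENTS given the spoken utterance."""
    # Toy keyword heuristic in place of a real classifier.
    scores = np.ones(len(INTENTS))
    for i, (device, action) in enumerate(INTENTS):
        if device in utterance:
            scores[i] += 2.0
        if action.split("_")[0] in utterance:
            scores[i] += 1.0
    return scores / scores.sum()

def spatial_model(gaze_target: str, device_state: dict) -> np.ndarray:
    """Stand-in for a trained spatial model: scores intents from
    spatial cues (here, just a gaze target) and device state."""
    scores = np.ones(len(INTENTS))
    for i, (device, action) in enumerate(INTENTS):
        if device == gaze_target:
            scores[i] += 2.0
        # Down-weight actions inconsistent with the device's state,
        # e.g. turning on a device that is already on.
        if device_state.get(device) == "on" and action == "turn_on":
            scores[i] *= 0.2
    return scores / scores.sum()

def predict_intent(utterance, gaze_target, device_state, w_text=0.5):
    """Ensemble the two modality-specific models by a weighted
    average of their intent distributions (one plausible scheme)."""
    p = (w_text * utterance_model(utterance)
         + (1 - w_text) * spatial_model(gaze_target, device_state))
    return INTENTS[int(np.argmax(p))]

# The utterance alone is ambiguous ("that"), but the spatial modality
# resolves the target device, illustrating complementary modalities.
print(predict_intent("turn that on", gaze_target="tv",
                     device_state={"tv": "off", "lamp": "on"}))
# -> ('tv', 'turn_on')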