Evaluating visual encoding quality of a mixed reality user interface for human–machine co-assembly in complex operational terrain

Zhuo Wang, Xiangyu Zhang, Liang Li, Yiliang Zhou, Zexin Lu, Yuwei Dai, Chaoqian Liu, Zekun Su, Xiaoliang Bai, Mark Billinghurst

Advanced Engineering Informatics (2023)

Abstract
During human–machine collaboration in manufacturing activities, it is important to provide real-time annotations in the three-dimensional workspace for local workers who may lack relevant experience and knowledge. For example, in MR assembly, workers need to be alerted so that they avoid entering hazardous areas when manually replacing components. Recently, many researchers have explored various visual cues for expressing physical task progress information in the MR interfaces of intelligent systems. However, the relationship between the incorporation of visual cues and the balance of interface cognition has not been well revealed, especially in tasks that require annotating hazardous areas in complex operational terrain. In this study, we developed a novel MR interface for an intelligent assembly system that supports local scene sharing based on dynamic 3D reconstruction, recognition of remote experts' behavioral intentions based on deep learning, and visual feedback on local workers' operational behavior based on external bounding boxes. Through a case study, we compared the visual encoding results of the proposed MR interface, which provides 3D annotations with context (3DAC), against three alternatives: 3D sketch cues (3DS), 3DS combined with 3D spatial cues (3DSC), and 3DS combined with adaptive visual cues (AVC). We found that for physical tasks that require specific area annotations, 3DAC better improves the quality of manual work and regulates the cognitive load distribution of the MR interface more reasonably.
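The abstract gives no implementation details for the bounding-box-based feedback it mentions. As a rough illustration only, the sketch below checks whether a tracked worker's external bounding box intersects an annotated hazardous region, assuming axis-aligned boxes in a shared workspace frame; all names (AABB, check_hazard) and the alert logic are hypothetical, not the authors' code.

# Hypothetical sketch of bounding-box-based hazard feedback: alert the local
# worker when a tracked body part's box enters an annotated hazardous area.
from dataclasses import dataclass

@dataclass
class AABB:
    """Axis-aligned bounding box in workspace coordinates (meters)."""
    min_corner: tuple[float, float, float]
    max_corner: tuple[float, float, float]

    def intersects(self, other: "AABB") -> bool:
        # Two AABBs overlap iff their extents overlap on all three axes.
        return all(
            self.min_corner[i] <= other.max_corner[i]
            and other.min_corner[i] <= self.max_corner[i]
            for i in range(3)
        )

def check_hazard(worker_box: AABB, hazard_boxes: list[AABB]) -> bool:
    """Return True if the worker's bounding box enters any hazardous area."""
    return any(worker_box.intersects(h) for h in hazard_boxes)

# Usage: a hand bounding box overlapping an annotated hazardous region.
hand = AABB((0.4, 0.0, 0.4), (0.6, 0.2, 0.6))
hazards = [AABB((0.5, 0.0, 0.5), (1.0, 0.5, 1.0))]
if check_hazard(hand, hazards):
    print("Warning: entering annotated hazardous area")  # would trigger an MR visual cue

In an actual MR system the boxes would be updated per frame from tracking data, and the positive case would drive the interface's visual warning rather than a console print.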
Keywords
Visual encoding, Mixed reality, User interface, Co-assembly, Complex operational terrain