Virtual Personal Assistant Design Effects on Memory Encoding

A.F. Chesser, K.N. Bramlett, A. Atchley, C.E. Gray, N.L. Tenhundfeld

2022 Systems and Information Engineering Design Symposium (SIEDS) (2022)

Abstract
Virtual personal assistants (VPAs) like Siri and Alexa have become common objects in households. Users frequently rely on these systems to search the internet or help retrieve information. As such, it is important to know how using these products affects cognitive processes like memory. Previous research suggests that visual speech perception influences auditory perception in human-human interactions. However, many of these VPAs are designed as a box or sphere that does not interact with the user visually. This lack of visual speech perception when interacting with a VPA could affect users' interaction with the system and their retention of information, such as determining how many ounces are in a cup or how to greet someone in another language. This poses the question of whether the design of these VPAs prevents users from retaining the information they get from these systems. To test this, we designed an experiment that explores interactions between user memory and either a traditional audio presentation (as is found with Siri or Alexa, for example) or one that allows for visual speech perception. Participants were asked to listen to an audio clip of a nonsensical story. In one condition, participants were asked to listen while looking at a blank screen (analogous to the lack of visual feedback inherent in current VPA designs). After a block of 25 audio clips, the participants took a test on the information heard. This process was repeated with an animated face with synchronized mouth movements instead of a blank screen. Other participants will experience the same two presentations, but in reverse order, so as to counterbalance condition presentation. Data collection is currently underway. We predicted that a VPA paired with synchronized lip movement would promote visual speech perception and thus help participants retain information.
While we are still collecting data, the current trend does not show a significant difference between the audio-only and lip-movement conditions. This could be an indication of differing lipreading abilities among participants.
Keywords
virtual personal assistant (VPA), lip reading, audio-visual, active attention, cross-modal correspondence