Automation bias with a conversational interface: User confirmation of misparsed information

2017 IEEE Conference on Cognitive and Computational Aspects of Situation Management (CogSIMA)(2017)

Abstract
We investigate automation bias in confirming erroneous information with a conversational interface. Participants in our studies used a conversational interface to report information in a simulated intelligence, surveillance, and reconnaissance (ISR) task. In the task, for flexibility and ease of use, participants reported information to the conversational agent in natural language. The conversational agent then interpreted the user's report in a human- and machine-readable language, and participants could accept or reject the agent's interpretation. Misparses occur when the agent incorrectly interprets a report and the user erroneously accepts the interpretation. We hypothesize that these misparses occurred naturally in the experiment due to automation bias and complacency, because the agent's interpretations were generally correct (92%). These errors indicate that some users were unable to maintain situation awareness when using the conversational interface. Our results illustrate concerns for deploying a flexible conversational interface in safety-critical environments (e.g., military or emergency operations).
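The confirm/reject loop described above can be sketched as a small simulation. This is an illustrative assumption-laden model, not the paper's experimental setup: the 92% parse accuracy comes from the abstract, but the complacent-acceptance rate and all function names are hypothetical.

```python
import random

# Hypothetical sketch of the report -> parse -> confirm loop: the agent
# parses correctly with probability 0.92 (from the abstract), and a
# complacent user accepts the interpretation with an assumed fixed
# probability regardless of whether it is correct.
PARSE_ACCURACY = 0.92   # reported in the abstract
ACCEPT_RATE = 0.95      # assumed complacency level, not from the paper

def simulate_reports(n_reports: int, seed: int = 0) -> dict:
    """Count how many misparses a complacent user would confirm."""
    rng = random.Random(seed)
    confirmed_misparses = 0
    for _ in range(n_reports):
        parse_correct = rng.random() < PARSE_ACCURACY
        user_accepts = rng.random() < ACCEPT_RATE
        # A misparse is an incorrect interpretation the user accepts.
        if user_accepts and not parse_correct:
            confirmed_misparses += 1
    return {"reports": n_reports, "confirmed_misparses": confirmed_misparses}

result = simulate_reports(1000)
```

Under these assumed rates, roughly 8% of reports are misparsed and most of those are confirmed, illustrating how even a highly accurate agent can accumulate confirmed errors when users rarely reject its output.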
Keywords
automation bias,complacency,conversational interface,human-machine interaction,controlled natural language