What's at Stake? Robot explanations matter for high- but not low-stake scenarios

2023 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)

Abstract
Although the field of Explainable Artificial Intelligence (XAI) in Human-Robot Interaction is gathering increasing attention, how different types of explanation compare across HRI scenarios is still not well understood. We conducted an exploratory online study with 335 participants analysing the interaction between type of explanation (counterfactual, feature-based, and no explanation), the stake of the scenario (high, low), and the application scenario (healthcare, industry). Participants viewed one of 12 different vignettes depicting a combination of these three factors and rated their system understanding and trust in the robot. Compared to no explanation, both counterfactual and feature-based explanations improved system understanding and performance trust (but not moral trust). Additionally, when no explanation was present, high-stake scenarios led to significantly worse performance trust and system understanding. These findings suggest that explanations can be used to calibrate users' perceptions of the robot in high-stake scenarios.
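To make the 3 (explanation type) × 2 (stake) × 2 (application scenario) between-subjects design concrete, the sketch below enumerates the 12 vignette conditions and fits a full-factorial model to simulated placeholder ratings. This is a minimal illustration, not the authors' analysis code: the condition labels mirror the abstract, but the ratings are randomly generated, the cell size of 28 is only an approximation of 335 participants spread over 12 cells, and the choice of a factorial ANOVA is an assumption, since the abstract does not name the statistical test used.

```python
import itertools

import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# The 3 x 2 x 2 design yields the 12 vignette conditions
# described in the abstract.
explanations = ["counterfactual", "feature-based", "none"]
stakes = ["high", "low"]
scenarios = ["healthcare", "industry"]
conditions = list(itertools.product(explanations, stakes, scenarios))
assert len(conditions) == 12  # one vignette per condition

# Simulated placeholder ratings -- NOT the paper's data.
# ~335 participants over 12 cells is roughly 28 per cell.
rng = np.random.default_rng(0)
rows = [
    {"explanation": e, "stake": st, "scenario": sc,
     "performance_trust": rng.normal(4.0, 1.0)}
    for e, st, sc in conditions
    for _ in range(28)
]
df = pd.DataFrame(rows)

# Full-factorial ANOVA on one rated outcome; the abstract's key
# result corresponds to the explanation x stake interaction term.
model = ols(
    "performance_trust ~ C(explanation) * C(stake) * C(scenario)",
    data=df,
).fit()
print(sm.stats.anova_lm(model, typ=2))
```

On real data, the finding reported above would show up as a significant explanation × stake interaction: performance trust and system understanding drop in high-stake scenarios only when no explanation is given.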