Ex(plainable) Machina: how social-implicit XAI affects complex human-robot teaming tasks.

ICRA (2023)

Abstract
In this paper, we investigated how shared experience-based counterfactual explanations affected people's performance and robots' persuasiveness during a decision-making task in a social HRI context. We used the Connect 4 game as a complex decision-making task in which participants and the robot played as a team against the computer. We compared two explanation-generation strategies (classical vs. shared experience-based) and investigated their differences in terms of team performance, the robot's persuasive power, and participants' perception of the robot and themselves. Our results showed that the two explanation strategies led to comparable performance. Moreover, shared experience-based explanations - grounded in the team's previous games - made the robot's suggestions more persuasive than classical ones. Finally, we noted that low-performers tended to follow the robot more than high-performers, providing insights into the potential danger for non-expert users interacting with expert explainable robots.
Keywords
social-implicit XAI, human-robot teaming, social HRI, counterfactual explanations, shared experience-based explanations, explanation generation, explanation strategies, complex decision-making task, Connect 4, team performance, persuasiveness, explainable robots