Trust regulation in Social Robotics: From Violation to Repair

CoRR(2023)

Abstract
While trust in human-robot interaction is increasingly recognized as necessary for the deployment of social robots, our understanding of how to regulate trust in human-robot interaction is still limited. In the current experiment, we evaluated different approaches to trust calibration in human-robot interaction. The experiment employed five different strategies for trust calibration: proficiency, situation awareness, transparency, trust violation, and trust repair. We implemented these interventions in a within-subject design in which participants (N=24) teamed up with a social robot and played a collaborative game. The level of trust was measured after each section using the Multi-Dimensional Measure of Trust (MDMT) scale. As expected, the interventions had a significant effect on i) violating and ii) repairing the level of trust throughout the interaction. Additionally, the robot demonstrating situation awareness was perceived as significantly more benevolent than the baseline.
Keywords
social robotics,trust,regulation