Forced Prosocial Behaviors towards Robots Induce Cognitive Conflict

Proceedings of the Human Factors and Ergonomics Society Annual Meeting (2023)

Abstract
With the rising use of social robots, it is important to understand how to evaluate their effects on human cognition. Thus, we aimed to implicitly measure prosociality towards robots (i.e., the tendency to impart rewards to robots) using a conflict-monitoring paradigm. Participants completed a gambling task in which they "Won" or "Lost" gambles. Afterwards, a computer assigned the outcome of their gamble either to themselves or to Cozmo, a social robot. Critically, participants had to confirm the computer's assignment with a keypress. We reasoned that if participants experienced conflict, confirming the assignment would be delayed. Results showed that participants experienced more conflict, as indexed by slower response times, when they won a gamble but had to give the reward to Cozmo. These data suggest that participants experienced conflict when forced to be prosocial towards Cozmo, and provide evidence that conflict monitoring can measure implicit attitudes towards robots.
Keywords
robots, conflict