People Attribute Purpose to Autonomous Vehicles When Explaining Their Behavior
CoRR (2024)
Abstract
Cognitive science can help us understand which explanations people might
expect, and in which format they frame these explanations, whether causal,
counterfactual, or teleological (i.e., purpose-oriented). Understanding the
relevance of these concepts is crucial for building good explainable AI (XAI)
which offers recourse and actionability. Focusing on autonomous driving, a
complex decision-making domain, we report empirical data from two surveys on
(i) how people explain the behavior of autonomous vehicles in 14 unique
scenarios (N1=54), and (ii) how they perceive these explanations in terms of
complexity, quality, and trustworthiness (N2=356). Participants deemed
teleological explanations to be of significantly higher quality than
counterfactual ones, with perceived teleology being the best predictor of perceived quality
and trustworthiness. Neither the perceived teleology nor the quality were
affected by whether the car was an autonomous vehicle or driven by a person.
This indicates that people use teleology to evaluate information about not just
other people but also autonomous vehicles. Taken together, our findings
highlight the importance of explanations that are framed in terms of purpose
rather than just, as is standard in XAI, the causal mechanisms involved. We
release the 14 scenarios and more than 1,300 elicited explanations publicly as
the Human Explanations for Autonomous Driving Decisions (HEADD) dataset.