Drivers' Understanding of Artificial Intelligence in Automated Driving Systems: A Study of a Malicious Stop Sign

JOURNAL OF COGNITIVE ENGINEERING AND DECISION MAKING (2022)

Abstract
Automated Driving Systems (ADS), like many other systems people use today, depend on reliable Artificial Intelligence (AI) for safe roadway operations. An essential AI function in ADS is computer vision for detecting roadway signs. The AI, however, is not always reliable and sometimes requires human intelligence to complete a task. For humans to collaborate with AI, it is critical to understand how humans perceive AI. In the present study, we investigated how human drivers perceive the AI's capabilities in a driving context where a stop sign is compromised, and how AI-related knowledge, experience, and trust play a role. We found that participants with more knowledge of AI tended to trust AI more, and those who reported more experience with AI had a greater understanding of AI. Participants correctly deduced that a maliciously manipulated stop sign would be more difficult for AI to identify. Nevertheless, participants still overestimated the AI's ability to recognize the malicious stop sign. Our findings suggest that the public does not yet have a sufficiently accurate understanding of specific AI systems, which leads to over-trust in AI under certain conditions.
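The paper studies human perception rather than implementing an attack, but the mechanism behind a "malicious stop sign" can be illustrated with a standard gradient-based perturbation (FGSM, Goodfellow et al.), which is one common way such sign manipulations are modeled. The sketch below is purely illustrative and is not the paper's method or model: `TinySignNet`, the random stand-in image, and the `epsilon` budget are all hypothetical assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical toy classifier standing in for an ADS sign-recognition model.
class TinySignNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)
        self.fc = nn.Linear(8 * 32 * 32, num_classes)

    def forward(self, x):
        x = F.relu(self.conv(x))
        return self.fc(x.flatten(1))

torch.manual_seed(0)
model = TinySignNet().eval()

# A random tensor stands in for a 32x32 stop-sign image; class 0 is "stop".
image = torch.rand(1, 3, 32, 32, requires_grad=True)
label = torch.tensor([0])

# FGSM: nudge each pixel in the direction that increases the classifier's loss.
loss = F.cross_entropy(model(image), label)
loss.backward()
epsilon = 0.1  # perturbation budget; larger values are more visible to humans
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)

with torch.no_grad():
    clean_conf = F.softmax(model(image), dim=1)[0, 0].item()
    adv_conf = F.softmax(model(adversarial), dim=1)[0, 0].item()
print(f'"stop" confidence: clean={clean_conf:.3f}, perturbed={adv_conf:.3f}')
```

The point of the sketch is the asymmetry the participants reasoned about: a small, sticker-like perturbation that barely changes the sign's appearance to a human driver can nonetheless move a classifier's confidence, which is why a manipulated sign is harder for the AI, but not for the human, to identify.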
Keywords
Malicious attack, artificial intelligence computer vision, artificial intelligence in automated driving systems, understanding of artificial intelligence, trust in artificial intelligence