Deceptive AI Systems That Give Explanations Are Just as Convincing as Honest AI Systems in Human-Machine Decision Making

arXiv (2022)

Abstract
The ability to discern between true and false information is essential to making sound decisions. However, with the recent increase in AI-based disinformation campaigns, it has become critical to understand the influence of deceptive systems on human information processing. In an experiment (N=128), we investigated how susceptible people are to deceptive AI systems by examining how their ability to discern true news from fake news varies when those systems are perceived as either human fact-checkers or AI fact-checking systems, and when the explanations provided by those fact-checkers are either deceptive or honest. We find that deceitful explanations significantly reduce accuracy, indicating that people are just as likely to believe deceptive AI explanations as honest ones. Although, before receiving assistance from an AI system, people had significantly higher weighted discernment accuracy on false headlines than on true headlines, we found that with AI assistance, discernment accuracy increased significantly when honest explanations were given for both true and false headlines, and decreased significantly when deceitful explanations were given for both. Further, we did not observe any significant differences in discernment between explanations perceived as coming from a human fact-checker and those perceived as coming from an AI fact-checker. Similarly, we found no significant differences in trust. These findings exemplify the dangers of deceptive AI systems and the need to find novel ways to limit their influence on human information processing.