Believe in Artificial Intelligence? A User Study on the ChatGPT's Fake Information Impact

IEEE Transactions on Computational Social Systems (2023)

Abstract
Technological evolution has enabled the development of new artificial intelligence (AI) models with generative capabilities. Among them, one of the most discussed is the virtual agent ChatGPT. This chatbot may occasionally produce fake information, as its producer, OpenAI, itself declares. Such a model can nevertheless provide very useful support in several tasks, ranging from text summarization to programming. The research community has only marginally investigated the impact that fake information created by AI models has on users' perceptions and on their belief in AI. We analyzed the impact of fake information produced by AI on user perceptions, specifically trust and satisfaction, by performing a user study on ChatGPT. A further question is whether discovering early or late that the tool can generate fake information affects users' perceptions differently. We conducted an experiment involving 62 university students, a category of users who may employ tools such as ChatGPT extensively. The experiment consisted of a guided interaction with ChatGPT. Some of the participants experienced failures of the chatbot, while a control group received only correct and reliable answers. We collected participants' perceptions of trust, satisfaction, and usability, together with the net promoter score (NPS). The results show a statistically significant difference in trust and satisfaction between the users who experienced fake information early and those who discovered ChatGPT's faulty behavior later in the interaction. There is no statistically significant difference between the users who received fake information late and the control group (no fake information). Usability and the NPS were also higher when the fake information was detected late in the interaction. When users become aware of the fake information generated by ChatGPT, their trust and satisfaction decrease, especially when they encounter it at an early stage of use of the chatbot. Nevertheless, perceived trust and satisfaction remain fairly high: some users are still enthusiastic, while others move toward a more conscious use of the tool, treating its output as support to be verified. A useful strategy could be to favor a critical use of ChatGPT, encouraging young people to verify the information it provides. This could become a new way to perform learning activities.
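The abstract reports the net promoter score (NPS) among the collected measures. As a minimal illustration of how that metric is conventionally computed (assuming the standard 0-10 recommendation question; the exact questionnaire and grouping used in the study are not given in the abstract), here is a short Python sketch with hypothetical data:

```python
# Minimal sketch of the standard net promoter score (NPS) computation.
# Assumption: each participant answered the usual 0-10 "How likely are you
# to recommend ChatGPT?" question; the study's actual instrument is not
# detailed in the abstract.

def net_promoter_score(ratings: list[int]) -> float:
    """Return the NPS: percentage of promoters (9-10) minus percentage of detractors (0-6)."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

# Hypothetical example groups (not the study's data):
early_fake = [6, 7, 5, 8, 6, 9, 4, 7]
late_fake = [8, 9, 7, 10, 8, 9, 6, 9]
print(net_promoter_score(early_fake))  # expected to be lower for the early-failure group
print(net_promoter_score(late_fake))
```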
Keywords
Believe artificial intelligence (AI), ChatGPT, controlled experiment, fake information, trust in AI