FACTIFY3M: A Benchmark for Multimodal Fact Verification with Explainability through 5W Question-Answering

Megha Chakraborty, Khushbu Pahwa, Anku Rani, Shreyas Chatterjee, Dwip Dalal, Harshit Dave, Ritvik G, Preethi Gurumurthy, Adarsh Mahor, Samahriti Mukherjee, Aditya Pakala, Ishan Paul, Janvita Reddy, Arghya Sarkar, Kinjal Sensharma, Aman Chadha, Amit P. Sheth, Amitava Das

CoRR (2023)

Abstract
Combating disinformation is one of the most pressing societal crises -- about 67% of the American population believes that disinformation produces a lot of uncertainty, and 10% of them knowingly propagate disinformation. Evidence shows that disinformation can manipulate democratic processes and public opinion, causing disruption in the stock market, panic and anxiety in society, and even death during crises. Therefore, disinformation should be identified promptly and, if possible, mitigated. With approximately 3.2 billion images and 720,000 hours of video shared online daily on social media platforms, scalable detection of multimodal disinformation requires efficient fact verification. Despite progress in automatic text-based fact verification (e.g., FEVER, LIAR), multimodal fact verification has received far less attention from the research community. To address this gap, we introduce FACTIFY 3M, a dataset of 3 million samples that pushes the boundaries of fact verification with a multimodal fake news dataset and offers explainability through the concept of 5W question-answering (who, what, when, where, and why). Salient features of the dataset include: (i) textual claims, (ii) ChatGPT-generated paraphrased claims, (iii) associated images, (iv) stable-diffusion-generated additional images (i.e., visual paraphrases), (v) pixel-level image heatmaps to foster image-text explainability of the claim, (vi) 5W QA pairs, and (vii) adversarial fake news stories.
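To make the dataset's composition concrete, the sketch below shows one plausible way a single FACTIFY 3M sample covering the seven listed components could be represented in code. All field names (e.g., claim, paraphrased_claims, qa_pairs) are illustrative assumptions, not the dataset's published schema.

```python
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Factify3MSample:
    """Hypothetical record layout for one FACTIFY 3M sample (field names are assumed)."""
    claim: str                                  # (i) textual claim
    paraphrased_claims: List[str]               # (ii) ChatGPT-generated paraphrases
    image_paths: List[str]                      # (iii) associated images
    generated_image_paths: List[str]            # (iv) stable-diffusion visual paraphrases
    heatmap_path: str                           # (v) pixel-level image heatmap
    qa_pairs: Dict[str, List[Dict[str, str]]]   # (vi) 5W QA pairs keyed by who/what/when/where/why
    adversarial_story: str                      # (vii) adversarial fake news story


# Minimal usage example with placeholder values.
sample = Factify3MSample(
    claim="Example claim text.",
    paraphrased_claims=["Example paraphrase of the claim."],
    image_paths=["images/claim_0.jpg"],
    generated_image_paths=["images/claim_0_sd.png"],
    heatmap_path="heatmaps/claim_0.png",
    qa_pairs={"who": [{"question": "Who is involved?", "answer": "..."}]},
    adversarial_story="Example adversarial fake news story.",
)
print(sample.claim)
```

A flat per-sample structure like this is only one option; the released data may well be distributed as JSON/CSV annotations plus image directories, in which case a loader would map those files into such records.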
Keywords
FACTIFY3M, multimodal verification, explainability, benchmark, question-answering