TeGA: A Text-Guided Generative-based Approach in Cheapfake Detection.
International Conference on Multimedia Retrieval (2024)
Abstract
The rise of social media enables access to valuable information but also fuels the spread of fake news and misinformation. Cheapfakes are a type of misinformation created with simple techniques, often by pairing unaltered images with misleading captions. To distinguish between Out-of-Context (OOC) and Not-Out-of-Context (NOOC) image-caption pairs, prior research has used text-to-image generative models to generate images from captions and then measured the correlation between the generated images and the original images. Although that approach cannot identify contradictions between the captions in a pair, it has demonstrated the promising potential of generative models for cheapfake detection. In this paper, we introduce a novel framework that leverages a generative model to fuse the content of an original image and a caption into a new image, referred to as a context-synthetic image. To convert the quantitative difference between an original image and a context-synthetic image, termed the contextual deviation value, into OOC and NOOC labels, we train a classification model on a newly curated dataset of 7,144 context-synthetic images generated with the Stable Diffusion model. We believe that our work offers valuable insights into the use of generative models for cheapfake detection, paving the way for future advancements in this field.
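The sketch below illustrates, at a high level, the kind of workflow the abstract describes: guide Stable Diffusion with the caption to produce a context-synthetic image from the original, quantify how far the result deviates from the original, and map that value to an OOC/NOOC decision. It is a minimal illustration under stated assumptions, not the paper's implementation: the choice of CLIP cosine distance as the contextual deviation value and the threshold decision rule are placeholders, whereas the paper trains a classification model on its 7,144-image dataset.

```python
# Hypothetical sketch of a TeGA-style pipeline (not the authors' released code).
# Assumptions: Stable Diffusion img2img fuses image and caption; CLIP cosine
# distance stands in for the contextual deviation value; a fixed threshold
# stands in for the trained classifier.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from transformers import CLIPModel, CLIPProcessor
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

# Text-guided generative model: inject the caption's context into the original
# image to produce a "context-synthetic" image.
sd = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16 if device == "cuda" else torch.float32,
).to(device)

# Feature extractor used to quantify how far the synthetic image drifted
# from the original (illustrative choice).
clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").to(device)
proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def contextual_deviation(image_path: str, caption: str, strength: float = 0.6) -> float:
    """Generate a context-synthetic image and return its distance to the original."""
    original = Image.open(image_path).convert("RGB").resize((512, 512))
    synthetic = sd(prompt=caption, image=original, strength=strength).images[0]
    inputs = proc(images=[original, synthetic], return_tensors="pt").to(device)
    with torch.no_grad():
        feats = clip.get_image_features(**inputs)
    feats = feats / feats.norm(dim=-1, keepdim=True)
    return 1.0 - torch.dot(feats[0], feats[1]).item()

def is_out_of_context(image_path: str, caption: str, threshold: float = 0.25) -> bool:
    """Placeholder decision rule; the paper instead trains a classifier on deviation values."""
    return contextual_deviation(image_path, caption) > threshold
```

In this reading, a caption consistent with the image should steer the generation only mildly, yielding a small deviation (NOOC), while a misleading caption should pull the synthetic image away from the original, yielding a large deviation (OOC).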