Explicit Correlation Learning for Generalizable Cross-Modal Deepfake Detection
CoRR (2024)
Abstract
With the rising prevalence of deepfakes, there is a growing interest in
developing generalizable detection methods for various types of deepfakes.
While effective in their specific modalities, traditional detection methods
fall short in addressing the generalizability of detection across diverse
cross-modal deepfakes. This paper aims to explicitly learn potential
cross-modal correlation to enhance deepfake detection towards various
generation scenarios. Our approach introduces a correlation distillation task,
which models the inherent cross-modal correlation based on content information.
This strategy helps to prevent the model from overfitting merely to
audio-visual synchronization. Additionally, we present the Cross-Modal Deepfake
Dataset (CMDFD), a comprehensive dataset with four generation methods to
evaluate the detection of diverse cross-modal deepfakes. The experimental
results on CMDFD and FakeAVCeleb datasets demonstrate the superior
generalizability of our method over existing state-of-the-art methods. Our code and data can be found at .
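The correlation distillation idea described above can be illustrated with a minimal sketch. This is not the paper's actual loss, only a generic illustration under assumed shapes: a student's audio-visual correlation matrix is pushed toward a target correlation produced by a (frozen) content-based teacher, so the model learns content-level correspondence rather than overfitting to raw audio-visual synchronization. All function and variable names here are hypothetical.

```python
import torch
import torch.nn.functional as F

def correlation_matrix(audio, visual):
    # audio, visual: (T, D) per-frame embeddings; returns (T, T)
    # matrix of cross-modal cosine similarities.
    audio = F.normalize(audio, dim=-1)
    visual = F.normalize(visual, dim=-1)
    return audio @ visual.T

def correlation_distillation_loss(student_audio, student_visual,
                                  teacher_audio, teacher_visual):
    # Distill the teacher's content-based cross-modal correlation
    # into the student (teacher side is detached from the graph).
    with torch.no_grad():
        target = correlation_matrix(teacher_audio, teacher_visual)
    pred = correlation_matrix(student_audio, student_visual)
    return F.mse_loss(pred, target)
```

When the student features match the teacher's, the loss is zero; any deviation in the pairwise audio-visual similarity structure is penalized, which is the distillation signal this kind of objective provides.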