Self-supervised Learning of Adversarial Example: Towards Good Generalizations for Deepfake Detection

IEEE Conference on Computer Vision and Pattern Recognition (2022)

Abstract
Recent studies in deepfake detection have yielded promising results when the training and testing face forgeries come from the same dataset. The problem remains challenging, however, when the detector must generalize to forgeries created by methods not seen in the training data. This work addresses generalizable deepfake detection from a simple principle: a generalizable representation should be sensitive to diverse types of forgeries. Following this principle, we propose to enrich the “diversity” of forgeries by synthesizing augmented forgeries with a pool of forgery configurations, and to strengthen the “sensitivity” to forgeries by requiring the model to predict the forgery configuration. To explore the large forgery augmentation space effectively, we further propose an adversarial training strategy that dynamically synthesizes the forgeries most challenging to the current model. Extensive experiments show that the proposed strategies are surprisingly effective (see Figure 1) and achieve superior performance over current state-of-the-art methods. Code is available at https://github.com/liangchen527/SLADD.
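For intuition, below is a minimal PyTorch sketch of the strategy the abstract describes: synthesize forgeries from a discrete pool of configurations, train a detector that also predicts the configuration as a self-supervised auxiliary task, and adversarially select the configuration hardest for the current detector. All names here are hypothetical illustrations (the two-attribute pool of blend type and region, and the noise-based `synthesize` stand-in are assumptions, not the authors' code); in particular, the paper trains a learned adversarial generator, whereas this sketch simply samples a few candidate configurations and keeps the worst-case one.

```python
# Sketch only: adversarial forgery augmentation with self-supervised
# configuration prediction. Module names and the configuration pool are
# hypothetical, not the authors' implementation.
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_BLEND_TYPES, NUM_REGIONS = 4, 5  # hypothetical forgery-configuration pool

class Detector(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.real_fake = nn.Linear(32, 2)                  # main detection head
        self.blend_head = nn.Linear(32, NUM_BLEND_TYPES)   # aux: predict blend type
        self.region_head = nn.Linear(32, NUM_REGIONS)      # aux: predict forged region

    def forward(self, x):
        f = self.backbone(x)
        return self.real_fake(f), self.blend_head(f), self.region_head(f)

def synthesize(real, blend_type, region):
    """Stand-in for the forgery synthesizer: perturb the image in a way
    indexed by (blend_type, region). A real implementation would blend a
    donor face into the target with the chosen mask and blending mode."""
    noise = 0.1 * (blend_type + 1) * torch.randn_like(real)
    fake = real.clone()
    h = real.shape[-2] // NUM_REGIONS
    fake[..., region * h:(region + 1) * h, :] += noise[..., region * h:(region + 1) * h, :]
    return fake

def hardest_config(model, real):
    """Adversarial selection: among a few sampled configurations, keep the
    one the current detector finds hardest (highest detection loss)."""
    worst, worst_cfg = -1.0, None
    fake_label = torch.ones(real.size(0), dtype=torch.long)
    for _ in range(4):  # sample a small candidate set from the pool
        cfg = (random.randrange(NUM_BLEND_TYPES), random.randrange(NUM_REGIONS))
        with torch.no_grad():
            logits, _, _ = model(synthesize(real, *cfg))
            loss = F.cross_entropy(logits, fake_label)
        if loss.item() > worst:
            worst, worst_cfg = loss.item(), cfg
    return worst_cfg

model = Detector()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
real = torch.rand(8, 3, 64, 64)  # placeholder batch of real faces

for step in range(3):
    blend, region = hardest_config(model, real)
    fake = synthesize(real, blend, region)
    x = torch.cat([real, fake])
    rf, bl, rg = model(x)
    y = torch.cat([torch.zeros(8, dtype=torch.long), torch.ones(8, dtype=torch.long)])
    loss = F.cross_entropy(rf, y)                                            # real/fake
    loss += F.cross_entropy(bl[8:], torch.full((8,), blend, dtype=torch.long))   # blend type
    loss += F.cross_entropy(rg[8:], torch.full((8,), region, dtype=torch.long))  # region
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The auxiliary losses are applied only to the synthesized half of the batch, since only forgeries carry a configuration label; the adversarial selection step keeps the detector trained on the forgeries it currently handles worst.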
Keywords
Computer vision for social good, Face and gestures