Custom Attribution Loss for Improving Generalization and Interpretability of Deepfake Detection

IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2022

Abstract
The simplicity and accessibility of tools for generating deepfakes pose a significant technical challenge for their detection and filtering. Many recently proposed methods for deepfake detection take a 'black-box' approach and therefore suffer from the lack of any information about the nature of fake videos beyond the fake or not-fake labels. In this paper, we approach deepfake detection by solving the related problem of attribution, where the goal is to distinguish each separate type of deepfake attack. We design a training approach with customized Triplet and ArcFace losses that improves the accuracy of deepfake detection on several publicly available datasets, including Google and Jigsaw, FaceForensics++, HifiFace, DeeperForensics, Celeb-DF, DeepfakeTIMIT, and DF-Mobio. Using Xception net as an example of an underlying architecture, we also demonstrate that, when trained for attribution, the model can be used as a tool to analyze the deepfake space and to compare it with the space of original videos.
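
For context, a minimal PyTorch sketch of how an ArcFace loss and a Triplet loss might be combined for multi-class deepfake attribution (one class per attack type plus "real") is shown below. The `ArcFaceHead` and `attribution_loss` names, the scale/margin values, and the naive in-batch triplet construction are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ArcFaceHead(nn.Module):
    """Additive angular margin (ArcFace-style) classification head."""
    def __init__(self, embed_dim, num_classes, scale=30.0, margin=0.5):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, embed_dim))
        self.scale, self.margin = scale, margin

    def forward(self, embeddings, labels):
        # Cosine similarity between L2-normalized embeddings and class centers.
        cos = F.linear(F.normalize(embeddings), F.normalize(self.weight))
        theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))
        # Add the angular margin only to the logit of the true class.
        target = F.one_hot(labels, cos.size(1)).bool()
        logits = torch.where(target, torch.cos(theta + self.margin), cos)
        return F.cross_entropy(self.scale * logits, labels)


def attribution_loss(embeddings, labels, arcface_head, triplet_weight=1.0):
    """Joint loss: ArcFace attribution term plus a naive in-batch triplet term."""
    arc = arcface_head(embeddings, labels)
    triplet_fn = nn.TripletMarginLoss(margin=0.3)
    anchors, positives, negatives = [], [], []
    # Naive triplet construction: for each anchor pick one same-class positive
    # and one different-class negative (the paper's mining strategy may differ).
    for i in range(labels.size(0)):
        pos = (labels == labels[i]).nonzero(as_tuple=True)[0]
        pos = pos[pos != i]
        neg = (labels != labels[i]).nonzero(as_tuple=True)[0]
        if len(pos) > 0 and len(neg) > 0:
            anchors.append(embeddings[i])
            positives.append(embeddings[pos[0]])
            negatives.append(embeddings[neg[0]])
    if anchors:
        tri = triplet_fn(torch.stack(anchors),
                         torch.stack(positives),
                         torch.stack(negatives))
    else:
        tri = embeddings.new_zeros(())
    return arc + triplet_weight * tri


# Example usage with random data: a 6-class attribution problem
# (real + five hypothetical attack types) and 128-dim embeddings.
if __name__ == "__main__":
    head = ArcFaceHead(embed_dim=128, num_classes=6)
    emb = F.normalize(torch.randn(16, 128), dim=1)
    lbl = torch.randint(0, 6, (16,))
    print(attribution_loss(emb, lbl, head))
```

In such a setup, the embeddings would come from the backbone (e.g., Xception) and the binary fake/real decision can be derived afterwards by grouping the attack classes, while the margin-based terms encourage separable clusters per attack type.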
Keywords
Deepfake attribution, deepfake detection, cross-database evaluations, ArcFace loss, Triplet loss