Detecting Deep-Fake Videos from Appearance and Behavior

arXiv (2020)

Cited by 71 | Viewed 35
Abstract
Synthetically generated audio and video -- so-called deep fakes -- continue to capture the imagination of the computer-graphics and computer-vision communities. At the same time, the democratization of access to technology that can create sophisticated manipulated video of anybody saying anything continues to be of concern because of its power to disrupt democratic elections, commit small to large-scale fraud, fuel dis-information campaigns, and create non-consensual pornography. We describe a biometric-based forensic technique for detecting face-swap deep fakes. This technique combines a static biometric based on facial recognition with a temporal, behavioral biometric based on facial expressions and head movements, where the behavioral embedding is learned using a CNN with a metric-learning objective function. We show the efficacy of this approach across several large-scale video datasets, as well as in-the-wild deep fakes.
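To make the idea in the abstract concrete, below is a minimal, hypothetical sketch (not the authors' implementation) of a behavioral-embedding CNN trained with a metric-learning (triplet) objective, plus a simple fusion of static and behavioral distances at test time. The encoder architecture, the per-frame input features, the thresholds, and the fusion rule `is_face_swap` are all illustrative assumptions, written here in PyTorch.

```python
# Hedged sketch (not the paper's code): behavioral embedding learned with a
# metric-learning objective, combined with a static face-recognition distance.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BehaviorEncoder(nn.Module):
    """Toy CNN mapping a clip of per-frame facial signals
    (e.g., expression/head-pose features over time) to an embedding."""
    def __init__(self, in_channels=16, embed_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.fc = nn.Linear(128, embed_dim)

    def forward(self, x):                       # x: (batch, channels, frames)
        z = self.net(x).squeeze(-1)             # (batch, 128)
        return F.normalize(self.fc(z), dim=-1)  # unit-length embedding

# Metric-learning objective: pull clips of the same identity together,
# push clips of other identities (or fakes) apart.
triplet_loss = nn.TripletMarginLoss(margin=0.2, p=2)

encoder = BehaviorEncoder()
opt = torch.optim.Adam(encoder.parameters(), lr=1e-4)

# One illustrative training step on random stand-in data.
anchor   = torch.randn(8, 16, 100)   # clips of person A
positive = torch.randn(8, 16, 100)   # other clips of person A
negative = torch.randn(8, 16, 100)   # clips of other identities

loss = triplet_loss(encoder(anchor), encoder(positive), encoder(negative))
opt.zero_grad()
loss.backward()
opt.step()

def is_face_swap(static_dist, behavior_dist, t_static=0.6, t_behavior=0.8):
    """Illustrative fusion rule with placeholder thresholds: a face swap
    preserves the target's appearance but not their mannerisms, so flag
    clips whose face-recognition distance to the claimed identity is small
    while the behavioral-embedding distance is large."""
    return static_dist < t_static and behavior_dist > t_behavior
```

The fusion step reflects the intuition in the abstract: appearance and behavior should agree for genuine video of a person, so a mismatch between the two biometrics is the signal used to flag a face-swap.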
Keywords
computer graphics, computer vision, sophisticated manipulated video, democratic elections, large-scale fraud, disinformation campaigns, nonconsensual pornography, biometric-based forensic technique, face-swap deep fakes, facial recognition, facial expressions, behavioral embedding, large-scale video datasets, in-the-wild deep fakes, deep-fake video detection, metric-learning objective function, CNN