Towards Transferable Adversarial Attack Against Deep Face Recognition

IEEE Transactions on Information Forensics and Security (2021)

Cited by 129 | Viewed 170
Abstract
Face recognition has achieved great success in the last five years due to the development of deep learning methods. However, deep convolutional neural networks (DCNNs) have been found to be vulnerable to adversarial examples. In particular, the existence of transferable adversarial examples severely threatens the robustness of DCNNs, since this type of attack can be applied in a fully black-box manner without any queries to the target system. In this work, we first investigate the characteristics of transferable adversarial attacks on face recognition, showing the superiority of feature-level methods over label-level methods. Then, to further improve the transferability of feature-level adversarial examples, we propose DFANet, a dropout-based method applied to the convolutional layers of surrogate models, which increases model diversity and yields ensemble-like effects. Extensive experiments on state-of-the-art face models with various training databases, loss functions, and network architectures show that the proposed method significantly enhances the transferability of existing attack methods. Finally, by applying DFANet to the LFW database, we generate a new set of adversarial face pairs that successfully attack four commercial APIs without any queries. The resulting TALFW database is released to facilitate research on the robustness and defense of deep face recognition.
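The abstract describes DFANet as a dropout-based technique applied inside the convolutional layers of a surrogate model during a feature-level attack. As an illustration only, the following minimal PyTorch-style sketch shows one way such an attack could be structured; the function names, hyperparameters (eps, alpha, steps, drop probability), and the cosine-similarity feature loss are assumptions made for the sketch, not the authors' released implementation.

import torch
import torch.nn.functional as F

def add_conv_dropout(surrogate, p=0.1):
    # Apply spatial dropout to every Conv2d output via forward hooks, so each
    # forward pass behaves like a slightly different surrogate model
    # (an illustration of the "ensemble-like effect" the abstract mentions).
    handles = []
    for m in surrogate.modules():
        if isinstance(m, torch.nn.Conv2d):
            handles.append(m.register_forward_hook(
                lambda mod, inp, out: F.dropout2d(out, p=p, training=True)))
    return handles  # call h.remove() on each handle to restore the model

def feature_level_attack(surrogate, src_img, tgt_img,
                         eps=8 / 255, alpha=2 / 255, steps=40):
    # Iteratively perturb src_img within an L_inf ball of radius eps so that
    # the surrogate's embedding of it moves toward the embedding of tgt_img.
    with torch.no_grad():
        tgt_feat = F.normalize(surrogate(tgt_img), dim=1)  # target embedding

    adv = src_img.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        feat = F.normalize(surrogate(adv), dim=1)
        loss = (feat * tgt_feat).sum()              # cosine similarity
        grad = torch.autograd.grad(loss, adv)[0]
        with torch.no_grad():
            adv = adv + alpha * grad.sign()         # ascend the similarity
            adv = src_img + (adv - src_img).clamp(-eps, eps)
            adv = adv.clamp(0, 1)
    return adv.detach()

The choice of which layers receive dropout, the exact feature loss, and the step schedule would follow the paper's specification; the sketch only conveys the overall structure of a dropout-diversified, feature-level transfer attack.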
Keywords
Face recognition, databases, training, network architecture, perturbation methods, robustness, benchmark testing, adversarial example, transferable attack, black-box attack, biometrics