NMI-FGSM-Tri: An Efficient and Targeted Method for Generating Adversarial Examples for Speaker Recognition

Junjian Zhang, Hao Tan, Binyue Deng, Jiacen Hu, Dong Zhu, Linyi Huang, Zhaoquan Gu

2022 7th IEEE International Conference on Data Science in Cyberspace (DSC) (2022)

Abstract
Most existing deep neural networks (DNNs) are opaque and fragile: they can be deceived by carefully crafted adversarial examples that carry tiny, imperceptible noise. This allows attackers to cause serious consequences in many DNN-assisted scenarios without being noticed by humans. In the field of speaker recognition, attacks on recognition systems are relatively mature; most works focus on white-box attacks that assume the attacker can obtain the DNN's internal information, and only a few study gray-box attacks. In this paper, we study black-box attacks on speaker recognition systems, which are more applicable in the real world since no knowledge of the target system is required. By combining the ideas of transfer-based and query-based attacks, our proposed method, NMI-FGSM-Tri, achieves a targeted goal: misleading the system into recognizing any audio as a registered speaker. Specifically, the method combines the Nesterov accelerated gradient (NAG), an ensemble attack, and a restart trigger to generate adversarial audio that transfers well to black-box DNNs. Experimental results show that the proposed method outperforms existing methods, reaching an attack success rate of up to 94.8% even when only one query is allowed.
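For intuition, the following is a minimal sketch of a targeted Nesterov-momentum iterative FGSM attack over an ensemble of surrogate models with random restarts, written in PyTorch. All names and hyperparameters (surrogate_models, target_label, epsilon, the random-re-initialization restart, etc.) are illustrative assumptions; the paper's exact NMI-FGSM-Tri procedure and restart trigger may differ.

```python
import torch
import torch.nn.functional as F

def nmi_fgsm_sketch(x, target_label, surrogate_models, epsilon=0.002,
                    alpha=0.0004, steps=10, mu=1.0, restarts=3):
    """Targeted NI-FGSM on an ensemble of surrogate speaker classifiers (sketch)."""
    best_adv = x.clone()
    for _ in range(restarts):                       # assumed restart: re-init the perturbation
        delta = torch.empty_like(x).uniform_(-epsilon, epsilon)
        g = torch.zeros_like(x)                     # accumulated momentum
        for _ in range(steps):
            x_adv = (x + delta).detach().requires_grad_(True)
            x_nes = x_adv - alpha * mu * g          # Nesterov look-ahead along the descent direction
            # Ensemble attack: average the targeted loss over all surrogates.
            loss = sum(F.cross_entropy(m(x_nes), target_label)
                       for m in surrogate_models) / len(surrogate_models)
            grad = torch.autograd.grad(loss, x_adv)[0]
            g = mu * g + grad / grad.abs().mean()   # L1-normalized momentum update
            delta = delta - alpha * g.sign()        # descend toward the target class
            delta = delta.clamp(-epsilon, epsilon)  # stay inside the L_inf ball
        x_adv = (x + delta).detach()
        # Keep a restart whose adversarial audio already fools every surrogate.
        if all((m(x_adv).argmax(dim=-1) == target_label).all()
               for m in surrogate_models):
            best_adv = x_adv
            break
    return best_adv
```

In this sketch, transferability comes from attacking several surrogate models at once, while the single permitted query to the black-box system would only be used to check whether the final audio is accepted as the target speaker.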
Keywords
speaker recognition,adversarial examples,transferability,black-box attack