Nesterov Adam Iterative Fast Gradient Method for Adversarial Attacks

Artificial Neural Networks and Machine Learning - ICANN 2022, Part I (2022)

Abstract
Deep Neural Networks (DNNs) are vulnerable to adversarial examples, which mislead DNNs with imperceptible perturbations. Existing adversarial attacks often exhibit weak transferability in the black-box setting, especially when attacking models with defense mechanisms. In this work, we regard adversarial example generation as a DNN optimization problem and propose the Nesterov Adam Iterative Fast Gradient Method (NAI-FGM), which applies Nesterov accelerated gradient and the Adam optimizer to iterative attacks. This allows the attack to adapt its step size automatically and escape local optima more effectively, improving the transferability of gradient-based attacks. Empirical results on the ImageNet dataset demonstrate that NAI-FGM improves the transferability of adversarial examples. Under the ensemble-model setting, NAI-FGM combined with various input transformations achieves an average attack success rate of 91.88% against six advanced defense models, 1.78%-3.3% higher than the benchmarks. Code is available at https://github.com/NinelM/NAI-FGM.
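The abstract describes combining a Nesterov lookahead with Adam-style adaptive moments inside an iterative sign-gradient attack. Below is a minimal PyTorch sketch of one plausible form of such an update, assuming a cross-entropy loss and an L-infinity budget; the function name, hyperparameters (mu, beta1, beta2, delta), and the exact lookahead and step-scaling rules are illustrative assumptions, not taken verbatim from the paper.

```python
import torch
import torch.nn.functional as F

def nai_fgm_attack(model, x, y, eps=16 / 255, steps=10, mu=1.0,
                   beta1=0.9, beta2=0.999, delta=1e-8):
    """Sketch of an NAI-FGM-style attack: a Nesterov lookahead point
    supplies the gradient, and Adam-style moment estimates shape the
    sign-gradient step. Hyperparameter names are assumptions."""
    alpha = eps / steps                      # per-iteration step budget
    x_adv = x.clone().detach()
    m = torch.zeros_like(x)                  # first moment (momentum)
    v = torch.zeros_like(x)                  # second moment
    for t in range(1, steps + 1):
        # Nesterov lookahead: evaluate the gradient ahead of the
        # current iterate, along the accumulated momentum direction.
        x_nes = (x_adv + alpha * mu * m).detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_nes), y)
        grad = torch.autograd.grad(loss, x_nes)[0]
        # Normalize per image (assumes NCHW input), as in MI/NI-FGSM.
        grad = grad / grad.abs().mean(dim=(1, 2, 3), keepdim=True)
        # Adam-style moment updates with bias correction.
        m = beta1 * m + (1 - beta1) * grad
        v = beta2 * v + (1 - beta2) * grad * grad
        m_hat = m / (1 - beta1 ** t)
        v_hat = v / (1 - beta2 ** t)
        # Adaptive step, then project back into the epsilon-ball.
        x_adv = x_adv + alpha * torch.sign(m_hat / (v_hat.sqrt() + delta))
        x_adv = torch.clamp(torch.min(torch.max(x_adv, x - eps), x + eps),
                            0, 1).detach()
    return x_adv
```

Taking the sign of the Adam direction keeps every step on the L-infinity ball's per-pixel budget, while the second-moment denominator rescales coordinates with consistently large or small gradients, which is one way the abstract's "adjust the attack step size by itself" could be realized.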
Keywords
Adversarial examples, Nesterov accelerated gradient, Adam optimization algorithm