Non-Lipschitz Attack: A More Sparse Adversarial Attack via Non-Lipschitz lp Regularization

CSIAM TRANSACTIONS ON APPLIED MATHEMATICS (2023)

Abstract
Deep neural networks are considerably vulnerable to adversarial attacks. Among these, sparse attacks mislead image classifiers with a pixel-level perturbation that alters only a few pixels, and they have much potential in physical-world applications. Existing sparse attacks are mostly based on l0 optimization, and few theoretical results are available for these methods. In this paper, we propose a novel sparse attack approach named the non-Lipschitz attack (NLA). For the proposed lp (0 < p < 1) regularization attack model, we derive a lower bound theory that yields a support inclusion analysis. Based on these results, we naturally extend previous works and present an iterative algorithm with support shrinking and thresholding strategies, together with an efficient ADMM inner solver. Experiments show that our NLA method outperforms comparative attacks on several datasets with different networks in both targeted and untargeted scenarios. NLA achieves a 100% attack success rate in almost all cases, and it perturbs roughly 14% fewer pixels on average than the recent l0 attack FMN-l0.
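As a rough, hypothetical sketch of the kind of lp-regularized attack objective the abstract describes: the model, variable names, targeted cross-entropy loss, and box constraint below are assumptions for illustration, not the authors' exact NLA formulation or algorithm (the support shrinking, thresholding, and ADMM steps are not reproduced here).

```python
# Hypothetical sketch of an lp (0 < p < 1) regularized attack objective,
# based only on the abstract above.
import torch
import torch.nn.functional as F

def lp_attack_objective(model, x, delta, target, p=0.5, lam=1e-2):
    """Targeted attack loss plus a non-Lipschitz ||delta||_p^p sparsity penalty."""
    logits = model(torch.clamp(x + delta, 0.0, 1.0))   # keep the perturbed image in [0, 1]
    misclass_loss = F.cross_entropy(logits, target)     # push the prediction toward the target label
    lp_penalty = (delta.abs() + 1e-12).pow(p).sum()     # non-convex, non-Lipschitz lp^p regularizer
    return misclass_loss + lam * lp_penalty
```

Because the lp^p penalty with 0 < p < 1 is non-Lipschitz at zero, minimizing such an objective tends to drive many entries of the perturbation exactly to zero, which is what makes the resulting attack sparse at the pixel level.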
Keywords
Sparse adversarial attack, lp (0 < p < 1) regularization, lower bound theory, support shrinkage, ADMM