AdvFlow: Inconspicuous Black-Box Adversarial Attacks Using Normalizing Flows

Advances in Neural Information Processing Systems (NeurIPS 2020)

Cited by 51 | Views 644
Abstract
Deep learning classifiers are susceptible to well-crafted, imperceptible variations of their inputs, known as adversarial attacks. In this regard, the study of powerful attack models sheds light on the sources of vulnerability in these classifiers, hopefully leading to more robust ones. In this paper, we introduce AdvFlow: a novel black-box adversarial attack method on image classifiers that exploits the power of normalizing flows to model the density of adversarial examples around a given target image. We see that the proposed method generates adversaries that closely follow the clean data distribution, a property which makes their detection less likely. Also, our experimental results show competitive performance of the proposed approach with some of the existing attack methods on defended classifiers. The code is available at https://github.com/hmdolatabadi/AdvFlow.
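To make the idea in the abstract concrete, below is a minimal, hedged sketch of a black-box attack that searches in the latent space of a normalizing flow using an NES-style (natural evolution strategies) gradient estimate. This is not the repository's implementation: the `IdentityFlow` placeholder, the `advflow_style_attack` function, the margin loss, and all hyperparameters (`sigma`, `pop`, `lr`, `steps`, `eps`) are illustrative assumptions chosen only to keep the snippet self-contained and runnable.

```python
# Illustrative sketch (NOT the authors' code) of a flow-based black-box attack:
# perturbations are searched in a flow's latent space with an NES-style
# gradient estimate, so decoded adversaries tend to stay near the data manifold.
import torch


class IdentityFlow:
    """Placeholder for a pre-trained normalizing flow.
    Assumed interface: forward() maps image -> latent, inverse() maps latent -> image."""
    def forward(self, x):
        return x

    def inverse(self, z):
        return z


def margin_loss(logits, label):
    """Score to minimize: margin of the true class over the runner-up.
    A negative margin means the candidate is already misclassified."""
    true = logits[:, label]
    other = logits.clone()
    other[:, label] = -float("inf")
    return true - other.max(dim=1).values


def advflow_style_attack(x, label, classifier, flow,
                         sigma=0.1, pop=20, lr=0.05, steps=200, eps=8 / 255):
    """NES-style search over a latent-space perturbation mean `mu`,
    querying the classifier only through its output logits (black-box)."""
    z = flow.forward(x.clone())           # latent code of the clean image, shape (1, C, H, W)
    mu = torch.zeros_like(z)              # mean of the latent perturbation distribution
    for _ in range(steps):
        noise = torch.randn((pop,) + tuple(z.shape[1:]))          # population of latent samples
        cand = flow.inverse(z + mu + sigma * noise)               # decode candidates to image space
        cand = (x + (cand - x).clamp(-eps, eps)).clamp(0, 1)      # keep candidates close to x
        with torch.no_grad():
            losses = margin_loss(classifier(cand), label)
        if (losses < 0).any():                                    # success: some candidate fools the model
            return cand[losses.argmin()].unsqueeze(0)
        # NES estimate of the gradient of the expected loss w.r.t. mu
        grad = (losses.view(-1, 1, 1, 1) * noise).mean(0, keepdim=True) / sigma
        mu = mu - lr * grad
    return (x + (flow.inverse(z + mu) - x).clamp(-eps, eps)).clamp(0, 1)
```

With a real pre-trained flow and a PyTorch classifier returning logits, the call would look like `adv = advflow_style_attack(x, y, classifier, flow)`; the identity flow above exists only so the sketch runs without external checkpoints.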
Keywords
normalizing flows