Robust Superpixel-Guided Attentional Adversarial Attack

CVPR (2020)

Abstract
Deep neural networks are vulnerable to adversarial samples, which can fool classifiers by adding small perturbations to the original image. Since the pioneering optimization-based adversarial attack method, many follow-up methods have been proposed in the past several years. However, most of these methods add perturbations in a "pixel-wise" and "global" way. Firstly, because of the contradiction between the local smoothness of natural images and the noisy character of these adversarial perturbations, the "pixel-wise" way makes these methods not robust to image-processing-based defense methods and steganalysis-based detection methods. Secondly, we find that adding perturbations to the background is less effective than adding them to the salient object, so the "global" way is also not optimal. Based on these two considerations, we propose the first robust superpixel-guided attentional adversarial attack method. Specifically, the adversarial perturbations are added only to the salient regions and are guaranteed to be the same within each superpixel. Through extensive experiments, we demonstrate that our method preserves its attack ability even in this highly constrained modification space. More importantly, compared to existing methods, it is significantly more robust to image-processing-based defense and steganalysis-based detection.
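The two constraints described in the abstract — perturbations uniform within each superpixel and zero outside salient regions — can be sketched as a projection step. The function below is an illustrative assumption of how such a projection could look (the paper's actual optimization procedure is not given in the abstract); superpixel labels and the saliency mask are assumed to come from any off-the-shelf segmentation and saliency model.

```python
import numpy as np

def project_perturbation(delta, superpixels, saliency_mask):
    """Project a raw perturbation onto the constrained space sketched in the
    abstract: uniform within each superpixel, zero outside salient regions.

    delta:          (H, W) or (H, W, C) raw per-pixel perturbation
    superpixels:    (H, W) integer superpixel labels
    saliency_mask:  (H, W) boolean mask, True on the salient object
    """
    out = np.zeros_like(delta, dtype=float)
    for label in np.unique(superpixels):
        region = superpixels == label
        # Replace per-pixel noise with the superpixel's mean perturbation,
        # enforcing local smoothness within the region.
        out[region] = delta[region].mean(axis=0)
    # Restrict the perturbation to the salient object ("attentional" part).
    out[~saliency_mask] = 0.0
    return out
```

An iterative attack could apply this projection after each gradient step, so the optimization only ever explores the constrained modification space.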
Keywords
adversarial samples, natural images, adversarial perturbations, defense methods, steganalysis-based detection methods, robust superpixel-guided attentional adversarial attack method, attack ability, image-processing-based defense, deep neural networks, optimization-based adversarial attack method