FoolChecker: A Platform to Evaluate the Robustness of Images against Adversarial Attacks

Neurocomputing (2020)

Abstract
Deep neural networks (DNNs) are inherently vulnerable to well-designed input samples called adversarial examples, which can easily alter the output of a DNN by adding slight perturbations to the input. A recent study showed that adversarial vulnerability is caused by non-robust features and is not inherently tied to the DNN. This paper presents a platform called FoolChecker to evaluate image robustness against adversarial attacks from the perspective of the image itself rather than of DNN models. We define the minimum perceptual distance between the original examples and the adversarial ones to quantify robustness against adversarial attacks. First, differential evolution is applied to generate candidate perturbation units with high perturbation priority. Then, a greedy algorithm repeatedly adds the pixel with the current highest perturbation priority to the perturbation units until the DNN model is fooled. Finally, the perceptual distance of the perturbation units is calculated as an index to evaluate the robustness of images against adversarial attacks. Experimental results show that FoolChecker can properly evaluate the robustness of images against adversarial attacks within acceptable time.
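The greedy phase described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `foolchecker_score`, the `(pixel_index, delta)` candidate format, and the use of plain L2 norm as the perceptual distance are all assumptions; in the paper, candidates come from differential evolution and are ranked by perturbation priority.

```python
import numpy as np

def foolchecker_score(image, predict, candidates, max_units=None):
    """Greedy phase of FoolChecker (sketch): apply perturbation units in
    descending priority order until the model's label flips, then return
    the perceptual distance (here plain L2, an assumption) of the
    accumulated perturbation.

    image      -- original image as a float NumPy array
    predict    -- callable mapping an image to a class label
    candidates -- perturbation units sorted by descending priority,
                  each a (flat_pixel_index, delta) pair
    Returns (distance, perturbed_image), or (None, None) if the model
    is never fooled within max_units perturbation units.
    """
    original_label = predict(image)
    perturbed = image.copy()
    flat = perturbed.reshape(-1)          # view: edits write through to perturbed
    for idx, delta in candidates[:max_units]:
        flat[idx] += delta                # apply the next-highest-priority unit
        if predict(perturbed) != original_label:
            distance = float(np.linalg.norm(perturbed - image))
            return distance, perturbed
    return None, None
```

A smaller returned distance means the image is fooled with less perturbation, i.e. it is less robust; images that survive all candidate units are, under this sketch, considered robust to the generated attack.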
Keywords
Deep neural network,Adversarial examples,Non-robust features,Differential evolution,Greedy algorithm