MagicGAN: Multiagent Attacks Generate Interferential Category via GAN

Knowl. Based Syst. (2022)

Abstract
Deep neural networks are vulnerable to adversarial samples: imperceptible perturbations that deceive a trained model into an interferential category. More crucially, adversarial samples are transferable; a sample crafted against a source agent (surrogate) model can also fool other target models, so an adversary poses a security threat even in black-box scenarios. However, existing transfer-based attacks use only a single agent model to craft adversarial samples, which leads to poor transferability. In this paper, we propose a novel attack method called Multiagent Attacks Generate Interferential Category via GAN (MagicGAN). Specifically, to keep adversarial samples from overfitting a single source agent, we design a multiagent discriminator that fits the decision boundaries of various target models and thereby provides more diverse gradient information for generating adversarial perturbations. This improves the generalization of our method, i.e., the transferability of the generated adversarial samples. In addition, to avoid the mode collapse common to GAN-based adversarial approaches, we construct a novel latent-data distance constraint that aligns the distances between latent codes of adversarial samples with the corresponding distances between the adversarial samples themselves, so MagicGAN more effectively generates a distribution close to the adversarial data. Extensive experiments on CelebA, CIFAR-10, MNIST and ImageNet validate the effectiveness and superiority of the proposed method. (c) 2022 Elsevier B.V. All rights reserved.
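The core intuition behind the multiagent discriminator can be illustrated without the full GAN machinery: averaging input gradients over several surrogate models yields a perturbation that crosses the decision boundaries the surrogates share, rather than overfitting one model. The sketch below is an assumption-laden toy (linear softmax "agents", an FGSM-style step instead of a trained generator) and is not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy surrogate "agent" models: each is a linear classifier x -> W @ x.
# Assumption: MagicGAN trains a generator against a multiagent
# discriminator; here we only illustrate the gradient-ensembling idea.
n_agents, n_classes, dim = 3, 5, 16
agents = [rng.normal(size=(n_classes, dim)) for _ in range(n_agents)]

def ce_grad(W, x, y):
    """Gradient of the cross-entropy loss w.r.t. the input x for one agent."""
    logits = W @ x
    p = np.exp(logits - logits.max())
    p /= p.sum()
    onehot = np.eye(n_classes)[y]
    return W.T @ (p - onehot)

def ensemble_perturbation(x, y, eps=0.1):
    """FGSM-style step along the gradient averaged over all agents."""
    g = np.mean([ce_grad(W, x, y) for W in agents], axis=0)
    return eps * np.sign(g)

x = rng.normal(size=dim)
y = 2
delta = ensemble_perturbation(x, y)
# x + delta raises the *average* loss across agents, pushing the sample
# off the decision region the surrogates agree on, not just one boundary.
```

Because the averaged loss is convex in `x` for these linear-softmax agents, the signed-gradient step is guaranteed not to decrease it; the paper's generator plays an analogous role but is trained, and the single `eps` bound here stands in for the imperceptibility constraint.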
Keywords
Adversarial sample, Transferability, Multiagent attack, Generative adversarial network