Bag of Tricks to Boost Adversarial Transferability
CoRR (2024)
Abstract
Deep neural networks are widely known to be vulnerable to adversarial
examples. However, vanilla adversarial examples generated under the white-box
setting often exhibit low transferability across different models. Since
adversarial transferability poses more severe threats to practical
applications, various approaches have been proposed for better transferability,
including gradient-based, input transformation-based, and model-related
attacks, etc. In this work, we find that several tiny changes in existing
adversarial attacks can significantly affect the attack performance, e.g., the
number of iterations and the step size. Based on careful studies of existing
adversarial attacks, we propose a bag of tricks to enhance adversarial
transferability, including momentum initialization, scheduled step size, dual
example, spectral-based input transformation, and several ensemble strategies.
Extensive experiments on the ImageNet dataset validate the high effectiveness
of our proposed tricks and show that combining them can further boost
adversarial transferability. Our work provides practical insights and
techniques to enhance adversarial transferability, and offers guidance for
improving attack performance in real-world applications through simple
adjustments.
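The abstract highlights step size, iteration count, momentum, and step-size scheduling as levers on transferability. As an illustrative sketch only (not the paper's exact procedure), the following implements momentum iterative FGSM (MI-FGSM) with a pluggable step-size schedule; the function name `mi_fgsm`, the `grad_fn` callback, and the toy gradient in the demo are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def mi_fgsm(x, grad_fn, eps=8 / 255, num_iter=10, mu=1.0, step_schedule=None):
    """Momentum Iterative FGSM with a pluggable step-size schedule (sketch).

    x             : input array with pixel values in [0, 1]
    grad_fn       : callable returning the loss gradient w.r.t. its input
    step_schedule : callable t -> step size at iteration t; defaults to
                    the constant eps / num_iter used by vanilla MI-FGSM
    """
    if step_schedule is None:
        step_schedule = lambda t: eps / num_iter  # constant step size
    x_adv = x.copy()
    g = np.zeros_like(x)  # accumulated momentum
    for t in range(num_iter):
        grad = grad_fn(x_adv)
        # L1-normalize the gradient before accumulating momentum
        g = mu * g + grad / (np.abs(grad).sum() + 1e-12)
        x_adv = x_adv + step_schedule(t) * np.sign(g)
        # project back into the eps-ball around x and the valid pixel range
        x_adv = np.clip(x_adv, x - eps, x + eps)
        x_adv = np.clip(x_adv, 0.0, 1.0)
    return x_adv

# Toy demo with a hypothetical all-ones gradient; the perturbation
# stays within the eps budget (up to floating-point rounding).
x = np.full((3, 4, 4), 0.5)
x_adv = mi_fgsm(x, grad_fn=np.ones_like, eps=0.03, num_iter=5)
```

A scheduled step size can then be tried by passing, e.g., a linearly decaying `step_schedule=lambda t: 2 * eps / num_iter * (1 - t / num_iter)` in place of the constant default.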