An Intermediate-Level Attack Framework on the Basis of Linear Regression

IEEE Transactions on Pattern Analysis and Machine Intelligence (2022)

Abstract
This article substantially extends our work published at ECCV (Li et al., 2020), in which an intermediate-level attack was proposed to improve the transferability of some baseline adversarial examples. Specifically, we advocate a framework in which a direct linear mapping from the intermediate-level discrepancies (between adversarial features and benign features) to the prediction loss of the adversarial example is established. By delving into the core components of this framework, we show that a variety of linear regression models can be adopted to establish the mapping, that the magnitude of the finally obtained intermediate-level adversarial discrepancy is correlated with transferability, and that performance can be further boosted by performing multiple runs of the baseline attack with random initializations. In addition, by leveraging these findings, we achieve new state-of-the-art results on transfer-based $\ell_\infty$ and $\ell_2$ attacks. Our code is publicly available at https://github.com/qizhangli/ila-plus-plus-lr.
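The framework described above lends itself to a simple two-stage procedure. The following is a minimal PyTorch sketch, not the authors' implementation (which is available at the repository linked above): it runs a baseline I-FGSM attack while recording intermediate-level feature discrepancies and the corresponding prediction losses, fits a ridge-regularized linear map from discrepancies to losses, and then maximizes the projection of the discrepancy onto the learned direction. Names such as `mid_layer` and `fit_linear_map`, the ridge coefficient, and the single-image batch assumption are illustrative choices, not details taken from the paper.

```python
# Minimal sketch of a linear-regression-based intermediate-level attack.
# Illustrative only; see the authors' repository for their actual code.
import torch
import torch.nn.functional as F

def fit_linear_map(discrepancies, losses, ridge=1.0):
    """Fit w so that <w, delta_h_t> approximates the recorded loss l_t
    (closed-form ridge regression)."""
    H = torch.stack(discrepancies)             # (T, d) flattened feature discrepancies
    y = torch.tensor(losses, device=H.device)  # (T,) prediction losses
    A = H.T @ H + ridge * torch.eye(H.shape[1], device=H.device)
    return torch.linalg.solve(A, H.T @ y)      # (d,)

def intermediate_level_attack(model, mid_layer, x, y, eps, alpha=2/255,
                              baseline_iters=10, ila_iters=50):
    """Assumes a single-image batch x with label y and a chosen mid_layer."""
    feats = {}
    hook = mid_layer.register_forward_hook(
        lambda m, i, o: feats.update(h=o.flatten(1)))

    # Benign intermediate-level features.
    with torch.no_grad():
        model(x)
    h_clean = feats['h'].clone()

    # Stage 1: baseline attack (I-FGSM), recording (discrepancy, loss) pairs.
    x_adv = x.clone().detach()
    discrepancies, losses = [], []
    for _ in range(baseline_iters):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        discrepancies.append((feats['h'] - h_clean).detach().squeeze(0))
        losses.append(loss.item())
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = torch.clamp(x_adv + alpha * grad.sign(),
                            x - eps, x + eps).clamp(0, 1).detach()

    # Stage 2: fit the linear map, then maximize <w, h(x') - h(x)>.
    w = fit_linear_map(discrepancies, losses)
    for _ in range(ila_iters):
        x_adv.requires_grad_(True)
        model(x_adv)
        proj = ((feats['h'] - h_clean) @ w).sum()
        grad, = torch.autograd.grad(proj, x_adv)
        x_adv = torch.clamp(x_adv + alpha * grad.sign(),
                            x - eps, x + eps).clamp(0, 1).detach()

    hook.remove()
    return x_adv
```

In the second stage the surrogate objective (the projection onto the fitted direction) replaces the cross-entropy loss, which is what drives the improved transferability; the multiple-random-initialization variant mentioned in the abstract would presumably repeat stage 1 from different random starting points and pool the recorded pairs before fitting the map.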
Keywords
Deep neural networks, adversarial examples, adversarial transferability, generalization ability, robustness