Transfer Attacks Revisited: A Large-Scale Empirical Study in Real Computer Vision Settings

2022 IEEE Symposium on Security and Privacy (SP)

Abstract
One intriguing property of adversarial attacks is their "transferability" – an adversarial example crafted with respect to one deep neural network (DNN) model is often found effective against other DNNs as well. Intensive research has been conducted on this phenomenon under simplistic controlled conditions. Yet, thus far there is still a lack of comprehensive understanding of transferability-based attacks ("transfer attacks") in real-world environments. To bridge this critical gap, we conduct the first large-scale systematic empirical study of transfer attacks against major cloud-based MLaaS platforms, taking the components of a real transfer attack into account. The study leads to a number of interesting findings which are inconsistent with existing ones, including: (i) simple surrogates do not necessarily improve real transfer attacks; (ii) no dominant surrogate architecture is found in real transfer attacks; (iii) it is the gap between posteriors (outputs of the softmax layer) rather than the gap between logits (the so-called κ value) that increases transferability. Moreover, by comparing with prior works, we demonstrate that transfer attacks possess many previously unknown properties in real-world environments, such as: (i) model similarity is not a well-defined concept; (ii) the L2 norm of the perturbation can generate high transferability without use of gradients and is a more powerful source than the L∞ norm. We believe this work sheds light on the vulnerabilities of popular MLaaS platforms and points to a few promising research directions.
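To make the logit-versus-posterior distinction in finding (iii) concrete, here is a minimal sketch (not taken from the paper; the toy logit vectors are illustrative assumptions) that contrasts the logit gap – the κ-style margin used in the Carlini-Wagner loss – with the same margin measured on softmax outputs, i.e. the posterior gap:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def logit_gap(logits, target):
    """Kappa-style margin: target logit minus the best non-target logit."""
    others = np.delete(logits, target)
    return logits[target] - others.max()

def posterior_gap(logits, target):
    """The same margin, measured on softmax outputs (posteriors)."""
    p = softmax(logits)
    others = np.delete(p, target)
    return p[target] - others.max()

# Two hypothetical adversarial examples with the SAME logit gap (2.0)
# but different posterior gaps, showing the two metrics can diverge.
z_a = np.array([2.0, 4.0, 1.0])  # target class 1
z_b = np.array([6.0, 8.0, 1.0])  # target class 1
for name, z in [("a", z_a), ("b", z_b)]:
    print(name, logit_gap(z, 1), round(posterior_gap(z, 1), 3))
```

Per the paper's finding, of two crafted examples with an identical logit gap (κ), the one with the larger posterior gap would be expected to transfer better.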
Keywords
transfer attacks revisited, adversarial attacks, transferability-based attacks, transfer attack