Generating universal language adversarial examples by understanding and enhancing the transferability across neural models

Liping Yuan
Xiaoqing Zheng
Yi Zhou
Other Links: arxiv.org

Abstract:

Deep neural network models are vulnerable to adversarial attacks. In many cases, malicious inputs intentionally crafted for one model can fool another model in the black-box attack setting. However, there is a lack of systematic studies on the transferability of adversarial examples and on how to generate universal adversarial examples. In...
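The black-box transfer setting described in the abstract can be illustrated with a minimal sketch. The toy linear "models" and the FGSM-style step below are illustrative assumptions, not the paper's method: a perturbation is crafted against a surrogate model's gradient and then applied, unchanged, to a separate target model.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(W, x):
    # Class = argmax of the linear logits W @ x.
    return int(np.argmax(W @ x))

# Two hypothetical 2-class linear models over 10-dim inputs. The shared
# "base" weights stand in for models trained on the same task, which is
# what makes gradient directions (and hence attacks) correlated.
base = rng.normal(size=(2, 10))
W_surrogate = base + 0.1 * rng.normal(size=(2, 10))
W_target = base + 0.1 * rng.normal(size=(2, 10))

def fgsm(W, x, y_true, eps):
    # For a linear model, the gradient of the margin
    # (logit[y_true] - logit[other]) w.r.t. x is W[y_true] - W[other];
    # step against its sign to shrink the margin (an FGSM-style update).
    grad = W[y_true] - W[1 - y_true]
    return x - eps * np.sign(grad)

x = rng.normal(size=10)
y = predict(W_surrogate, x)            # use the surrogate's label
x_adv = fgsm(W_surrogate, x, y, eps=0.5)

# The attack is crafted only from the surrogate; whether it also flips
# the target's prediction is the transferability question.
print("surrogate fooled: ", predict(W_surrogate, x_adv) != y)
print("transfers to target:", predict(W_target, x_adv) != predict(W_target, x))
```

Because the perturbation is `eps * sign(grad)` per coordinate, it stays inside an L-infinity ball of radius `eps` around the clean input, the usual constraint in this attack family.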
