Generating universal language adversarial examples by understanding and enhancing the transferability across neural models
Abstract:
Deep neural network models are vulnerable to adversarial attacks. In many cases, malicious inputs intentionally crafted for one model can fool another model in the black-box attack setting. However, there is a lack of systematic studies on the transferability of adversarial examples and how to generate universal adversarial examples. In...
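Since the full abstract is truncated above, the following is only a minimal, hypothetical sketch of the transferability idea it describes: an adversarial text crafted with full access to one (surrogate) model is then tested, black-box, against a different (target) model. Everything in the sketch, including the toy corpus, the bag-of-words features, the synonym table, and the greedy word-substitution attack, is an illustrative assumption, not the paper's method.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB

# Toy sentiment corpus (hypothetical; stands in for a real dataset).
texts = [
    "great wonderful movie", "fantastic superb acting", "brilliant lovely film",
    "terrible awful movie", "horrible dreadful acting", "bad poor film",
]
labels = [1, 1, 1, 0, 0, 0]

# Hand-made synonym table (an assumption for illustration only).
synonyms = {"great": ["fine"], "wonderful": ["pleasant"], "movie": ["film"]}

# Fit the vocabulary on the training text plus synonym candidates so that
# substituted words still map to features.
vocab_texts = texts + [" ".join(w for alts in synonyms.values() for w in alts)]
vec = CountVectorizer().fit(vocab_texts)
X = vec.transform(texts)

surrogate = LogisticRegression().fit(X, labels)  # attacker has white-box access
target = MultinomialNB().fit(X, labels)          # black-box victim model

def attack(sentence, model):
    """Greedy word substitution: keep a swap whenever it lowers the
    surrogate's confidence in the originally predicted class."""
    words = sentence.split()
    label = int(model.predict(vec.transform([sentence]))[0])
    for i, w in enumerate(list(words)):
        for alt in synonyms.get(w, []):
            cand = words.copy()
            cand[i] = alt
            p_now = model.predict_proba(vec.transform([" ".join(words)]))[0][label]
            p_cand = model.predict_proba(vec.transform([" ".join(cand)]))[0][label]
            if p_cand < p_now:
                words = cand
    return " ".join(words)

original = "great wonderful movie"
adversarial = attack(original, surrogate)

# Transfer check: does the example crafted against the surrogate also
# change the prediction of the never-queried target model?
for text in (original, adversarial):
    print(text, "->",
          "surrogate:", surrogate.predict(vec.transform([text]))[0],
          "target:", target.predict(vec.transform([text]))[0])
```

In this toy setup a substitution that hurts the surrogate tends to hurt the target as well, because both classifiers lean on the same lexical features; the questions the abstract raises are when and why such transfer happens across neural models, and how to craft examples that transfer universally.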