Purifying Adversarial Perturbation with Adversarially Trained Auto-encoders

Hebi Li
Qi Xiao
Shixin Tian

arXiv: Learning, 2019.


Abstract:

Machine learning models are vulnerable to adversarial examples. Iterative adversarial training has shown promising results against strong white-box attacks. However, adversarial training is very expensive, and this expensive training scheme must be repeated every time a model needs to be protected. In this paper, we propose to app...
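The purification idea the abstract describes, using an auto-encoder in front of any classifier to strip adversarial perturbations, can be sketched in a toy setting. The snippet below is a hypothetical illustration, not the paper's code: the "auto-encoder" is a hand-set linear projection onto the clean-data subspace, standing in for the adversarially trained purifier the paper proposes, and the classifier, subspace, and FGSM step are all illustrative choices.

```python
import numpy as np

# Hypothetical illustration of purification (not the paper's implementation).
w = np.array([1.0, 2.0])       # fixed linear classifier: sign(w @ x)
U = np.array([[1.0], [0.0]])   # clean data lies on span(U)

def classify(x):
    return 1 if w @ x > 0 else -1

def fgsm(x, y, eps):
    # FGSM on the margin loss -y * (w @ x): gradient w.r.t. x is -y * w
    return x + eps * np.sign(-y * w)

def purify(x):
    # stand-in linear auto-encoder: encode z = U^T x, decode x_hat = U z,
    # which removes the perturbation component orthogonal to the data subspace
    z = U.T @ x
    return (U @ z).ravel()

x_clean = np.array([1.0, 0.0])
y = 1
x_adv = fgsm(x_clean, y, eps=0.6)

print(classify(x_clean))        # 1  (clean input classified correctly)
print(classify(x_adv))          # -1 (attack flips the prediction)
print(classify(purify(x_adv)))  # 1  (purifier restores the prediction)
```

The point of the sketch is the deployment shape the abstract hints at: the purifier is trained once and then reused in front of any model, so each new model no longer needs its own expensive adversarial training run.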
