FADER: Fast Adversarial Example Rejection

Francesco Crecchi
Angelo Sotgiu

Abstract:

Deep neural networks are vulnerable to adversarial examples, i.e., carefully crafted inputs that mislead classification at test time. Recent defenses have been shown to improve adversarial robustness by detecting anomalous deviations from legitimate training samples at different layer representations - a behavior normally exhibited by a...
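The detection idea sketched in the abstract - flagging inputs whose internal representations deviate from those of legitimate training samples - can be illustrated with a minimal, hypothetical example. The sketch below (not the paper's actual method; the centroid features, threshold, and synthetic data are all assumptions for illustration) rejects an input when its feature-space representation is far from every class centroid computed on training data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "layer representations": two legitimate classes
# clustered in an 8-dimensional feature space (illustrative only).
feats = np.vstack([rng.normal(0.0, 0.5, (50, 8)),
                   rng.normal(5.0, 0.5, (50, 8))])
labels = np.array([0] * 50 + [1] * 50)

# Per-class centroids of the training representations.
centroids = np.stack([feats[labels == c].mean(axis=0) for c in (0, 1)])

def reject(x, centroids, threshold):
    """Flag x as anomalous if it lies far from every class centroid."""
    dists = np.linalg.norm(centroids - x, axis=1)
    return dists.min() > threshold

# A legitimate-looking sample sits near a centroid;
# an off-manifold sample does not and is rejected.
legit = np.full(8, 5.0)
anomalous = np.full(8, 20.0)
print(reject(legit, centroids, threshold=3.0))      # False
print(reject(anomalous, centroids, threshold=3.0))  # True
```

In a detector of this family, the same distance check would typically be applied at several layers of the network, with the thresholds tuned on held-out legitimate data.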
