Robustness of classifiers: from adversarial to random noise

Advances in Neural Information Processing Systems 29 (NIPS 2016), pp. 1632–1640, 2016.

Cited by: 206
Other links: dblp.uni-trier.de | academic.microsoft.com | arxiv.org

Abstract:

Several recent works have shown that state-of-the-art classifiers are vulnerable to worst-case (i.e., adversarial) perturbations of the datapoints. On the other hand, it has been empirically observed that these same classifiers are relatively robust to random noise. In this paper, we propose to study a semi-random noise regime that generalizes both the random and worst-case noise regimes.
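
As a rough illustration of the semi-random regime, the sketch below (not from the paper) uses a toy linear binary classifier, for which the smallest perturbation reaching the decision boundary has a closed form. When the perturbation is confined to a random m-dimensional subspace of the d-dimensional input space, its required norm grows roughly as sqrt(d/m) times the worst-case (adversarial) robustness, interpolating between the worst-case (m = d) and purely random (m = 1) regimes. The setup and all names below are illustrative only, using NumPy.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 1000                    # input dimension
w = rng.normal(size=d)      # weights of a toy linear classifier f(x) = sign(w @ x + b)
b = 0.0
x = rng.normal(size=d)      # a datapoint to perturb

def worst_case_robustness(w, b, x):
    # Distance from x to the decision boundary: norm of the smallest
    # perturbation (in any direction) that reaches it.
    return abs(w @ x + b) / np.linalg.norm(w)

def subspace_robustness(w, b, x, m, rng):
    # Same distance, but with the perturbation confined to a random
    # m-dimensional subspace spanned by the orthonormal columns of S.
    S, _ = np.linalg.qr(rng.normal(size=(x.size, m)))
    return abs(w @ x + b) / np.linalg.norm(S.T @ w)

r_star = worst_case_robustness(w, b, x)
for m in (1, 10, 100, 1000):
    r_m = np.mean([subspace_robustness(w, b, x, m, rng) for _ in range(20)])
    print(f"m={m:4d}   r_S / r* = {r_m / r_star:6.2f}   sqrt(d/m) = {np.sqrt(d / m):6.2f}")
```

For m = 1 the required perturbation is roughly sqrt(d) times larger than the adversarial one, which is the sense in which a classifier can be robust to random noise yet fragile to worst-case perturbations.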
