Adversarial Examples that Fool both Computer Vision and Time-Limited Humans

Advances in Neural Information Processing Systems 31 (NIPS 2018), pp. 3914–3924, 2018.


Abstract:

Machine learning models are vulnerable to adversarial examples: small changes to images can cause computer vision models to make mistakes, such as identifying a school bus as an ostrich. However, it is still an open question whether humans are prone to similar mistakes. Here, we address this question by leveraging recent techniques that tr...
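The abstract's notion of "small changes to images" can be illustrated with the fast gradient sign method (FGSM), a standard attack from the adversarial-examples literature (not necessarily the construction used in this paper). The sketch below uses a hypothetical, hand-built logistic-regression classifier with toy weights and input, and shows a bounded perturbation flipping its prediction:

```python
import numpy as np

# Illustrative sketch: FGSM against a toy logistic-regression "model".
# All weights and inputs here are hypothetical values, not from the paper.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Fixed "trained" weights for a 4-feature binary classifier.
w = np.array([1.0, -2.0, 0.5, 1.5])
b = 0.1

def predict_prob(x):
    return sigmoid(w @ x + b)

def fgsm(x, y, eps):
    """Perturb x by eps in the sign of the loss gradient w.r.t. x."""
    p = predict_prob(x)
    # For logistic regression with cross-entropy loss, dL/dx = (p - y) * w.
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

x = np.array([0.9, 0.1, 0.3, 0.2])  # clean input, classified as class 1
x_adv = fgsm(x, y=1, eps=0.6)       # every pixel moves by at most 0.6

print(predict_prob(x) > 0.5)      # clean prediction: class 1
print(predict_prob(x_adv) > 0.5)  # adversarial prediction: flipped
```

Because each coordinate moves by at most `eps`, the perturbation is small in the max-norm sense, yet the model's decision changes; deep-network attacks follow the same gradient-sign idea with backpropagated gradients.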
