Quantifying Perceptual Distortion of Adversarial Examples

arXiv: Machine Learning, 2019.

Abstract:

Recent work has shown that additive threat models, which only permit the addition of bounded noise to the pixels of an image, are insufficient for fully capturing the space of imperceivable adversarial examples. For example, small rotations and spatial transformations can fool classifiers, remain imperceivable to humans, but have large …
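The abstract's point can be illustrated with a minimal sketch (not from the paper itself): a spatial transformation as small as a one-pixel shift of a high-contrast image is nearly invisible to a human, yet the corresponding pixel-space "perturbation" has an L-infinity norm far above a typical additive budget such as eps = 8/255. All names and values below are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch (assumption, not the paper's code): a one-pixel
# horizontal shift of a sharp-edged image yields a pixel-space difference
# whose L-infinity norm vastly exceeds a common additive threat budget.
rng = np.random.default_rng(0)
img = (rng.random((32, 32)) > 0.5).astype(np.float32)  # binary image in [0, 1]

shifted = np.roll(img, shift=1, axis=1)  # tiny spatial transform: shift right by 1 px
delta = shifted - img                    # viewed as an additive perturbation

eps = 8 / 255  # typical additive (L_inf) threat-model budget
linf = float(np.abs(delta).max())
print(linf, ">>", eps)  # the shift's L_inf distortion dwarfs eps
```

This is why bounding only the additive norm of the noise fails to capture perceptually small transformations: the same visual change, expressed as a pixel-wise difference, looks enormous under an L_p metric.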
