Robustness of Classifiers to Universal Perturbations: A Geometric Perspective
International Conference on Learning Representations (ICLR), 2018.
Abstract:
Deep networks have recently been shown to be vulnerable to universal perturbations: there exist very small image-agnostic perturbations that cause most natural images to be misclassified by such classifiers. In this paper, we provide a quantitative analysis of the robustness of classifiers to universal perturbations, and draw a formal link between the robustness to universal perturbations and the geometry of the decision boundary.
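To make the notion concrete (this is an illustrative sketch, not the paper's method): a perturbation v is "universal" when a single, image-agnostic v changes the predicted label of a large fraction of inputs, i.e. argmax f(x + v) differs from argmax f(x) for most x. The snippet below, assuming a generic PyTorch classifier with placeholder model and data, measures this fooling rate for a fixed v.

```python
# Illustrative sketch: measuring the "fooling rate" of a fixed, image-agnostic
# perturbation v on a classifier f. The model and data below are placeholders
# used only to make the example runnable.
import torch
import torch.nn as nn

def fooling_rate(model: nn.Module, images: torch.Tensor, v: torch.Tensor) -> float:
    """Fraction of images whose predicted label changes under x -> x + v."""
    model.eval()
    with torch.no_grad():
        clean_labels = model(images).argmax(dim=1)
        perturbed_labels = model(images + v).argmax(dim=1)
    return (clean_labels != perturbed_labels).float().mean().item()

if __name__ == "__main__":
    # Placeholder linear classifier and random images (hypothetical setup).
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    images = torch.randn(64, 3, 32, 32)
    # One small perturbation shared by every image in the batch.
    v = 0.05 * torch.randn(1, 3, 32, 32)
    print(f"fooling rate: {fooling_rate(model, images, v):.2%}")
```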