Divide, Denoise, and Defend against Adversarial Attacks

arXiv preprint arXiv:1802.06806 [cs.CV], 2018.


Abstract:

Deep neural networks, although shown to be a successful class of machine learning algorithms, are known to be extremely unstable to adversarial perturbations. Improving the robustness of neural networks against these attacks is important, especially for security-critical applications. To defend against such attacks, we propose dividing the input image into multiple patches, denoising each patch independently, and reconstructing the image.
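
As a rough illustration of the divide-and-denoise pipeline the abstract describes, the sketch below splits an image into non-overlapping patches, denoises each patch independently, and reassembles the result before it would be passed to a classifier. The function name `d3_defend`, the patch size, and the Gaussian-blur denoiser are all stand-ins chosen for this sketch; the paper's actual denoiser is not specified here.

```python
# Minimal sketch of a divide-denoise-reconstruct defense.
# The Gaussian blur is a hypothetical stand-in for the denoiser.
import numpy as np
from scipy.ndimage import gaussian_filter


def d3_defend(image: np.ndarray, patch_size: int = 8,
              sigma: float = 1.0) -> np.ndarray:
    """Divide `image` (H, W, C) into patches, denoise each patch
    independently, and reconstruct the image from the results."""
    h, w, _ = image.shape
    out = np.empty_like(image)
    for y in range(0, h, patch_size):
        for x in range(0, w, patch_size):
            patch = image[y:y + patch_size, x:x + patch_size]
            # Denoise this patch on its own; blur only the two
            # spatial axes, leaving the channel axis untouched.
            out[y:y + patch_size, x:x + patch_size] = gaussian_filter(
                patch, sigma=(sigma, sigma, 0))
    return out


# Usage: defended = d3_defend(adversarial_image); then classify `defended`.
```

Because each patch is processed independently, the reconstruction step is non-trivial to differentiate through end to end, which is the intuition behind using such preprocessing as a defense against gradient-based attacks.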
