Divide, Denoise, and Defend against Adversarial Attacks
arXiv preprint arXiv:1802.06806 (Computer Vision and Pattern Recognition), 2018.
Deep neural networks, although a highly successful class of machine learning algorithms, are known to be extremely unstable under adversarial perturbations. Improving the robustness of neural networks against these attacks is important, especially for security-critical applications. To defend against such attacks, we propose dividing th…