On the Robustness of the CVPR 2018 White-Box Adversarial Example Defenses

arXiv: Computer Vision and Pattern Recognition (2018)

Abstract
Neural networks are known to be vulnerable to adversarial examples. In this note, we evaluate the two white-box defenses that appeared at CVPR 2018 and find they are ineffective: when applying existing techniques, we can reduce the accuracy of the defended models to 0%.
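The "existing techniques" used to break such defenses are typically gradient-based attacks. As a minimal illustration of the idea (not the paper's actual attack, which targets the defended CVPR 2018 models), the following sketch implements the fast gradient sign method (FGSM) against a toy logistic-regression classifier; the weights, inputs, and epsilon are invented for the example:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm(x, y, w, b, eps):
    """One-step L-infinity attack: shift each input coordinate by eps in the
    sign of the cross-entropy loss gradient with respect to that coordinate."""
    logit = sum(wi * xi for wi, xi in zip(w, x)) + b
    p = sigmoid(logit)                      # predicted probability of class 1
    grad = [(p - y) * wi for wi in w]       # d(loss)/dx for logistic regression
    return [xi + eps * math.copysign(1.0, gi) for xi, gi in zip(x, grad)]

# Toy example: a point correctly classified as class 1 ...
w, b = [2.0, -1.0], 0.0
x = [1.0, 0.5]                              # logit = 1.5, so class 1
x_adv = fgsm(x, y=1.0, w=w, b=b, eps=1.0)   # perturb toward class 0

logit_adv = sum(wi * xi for wi, xi in zip(w, x_adv)) + b
print(logit_adv)                            # now negative: label flipped
```

Stronger iterated variants (e.g., projected gradient descent) repeat this step with a smaller epsilon and project back into the allowed perturbation ball; attacks of this family are what drive the defended models' accuracy to 0%.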