Category-wise Attack: Transferable Adversarial Examples for Anchor Free Object Detection
arXiv (2020)
Abstract
Deep neural networks have been demonstrated to be vulnerable to adversarial attacks: subtle perturbations can completely change the classification results. This vulnerability has led to a surge of research in the area. However, most existing work is dedicated to attacking anchor-based object detection models. In this work, we present an effective and efficient algorithm for generating adversarial examples that attack anchor-free object detection models, based on two ideas. First, we conduct category-wise rather than instance-wise attacks on the object detectors. Second, we leverage high-level semantic information to generate the adversarial examples. Surprisingly, the generated adversarial examples are not only able to effectively attack the targeted anchor-free object detector but can also be transferred to attack other object detectors, even anchor-based detectors such as Faster R-CNN.
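The core idea of a category-wise attack is to perturb the input so as to suppress the detection scores of all instances of a category jointly, rather than attacking each detected instance separately. The sketch below illustrates this with a PGD-style iterative sign-gradient attack on a toy linear "detector head"; the linear model, array sizes, and step parameters are all illustrative assumptions, not the authors' actual method or architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a detector head: a fixed linear map from a flattened
# "image" to per-instance scores of one category. The paper attacks real
# anchor-free detectors; this linear model is a hypothetical simplification
# so the gradient is analytic and the example stays self-contained.
W = rng.normal(size=(3, 16))  # 3 detected instances x 16 input pixels

def instance_scores(x):
    """One confidence score per detected instance of the target category."""
    return W @ x

def category_wise_attack(x, steps=10, alpha=0.05, eps=0.3):
    """PGD-style sketch: minimize the SUMMED score over all instances of a
    category at once (category-wise), instead of per-instance attacks."""
    x_adv = x.copy()
    for _ in range(steps):
        # Gradient of the aggregate category loss sum_i (W @ x)_i w.r.t. x;
        # for this linear toy model it is simply the column sum of W.
        grad = W.sum(axis=0)
        x_adv = x_adv - alpha * np.sign(grad)     # step to lower the scores
        x_adv = np.clip(x_adv, x - eps, x + eps)  # keep perturbation bounded
    return x_adv

x = rng.normal(size=16)
x_adv = category_wise_attack(x)
# The attack lowers the total category score within an eps-ball of x.
print(instance_scores(x_adv).sum() < instance_scores(x).sum())
```

In a real detector the gradient would come from backpropagating an aggregate loss over the category's activation map, but the perturb-and-project loop has the same shape.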
Keywords
Category-wise attacks, adversarial attacks, object detection, anchor-free object detection