Contextual Adversarial Attacks For Object Detection

2020 IEEE International Conference on Multimedia and Expo (ICME)

Citations: 30 | Views: 75
Abstract
Recent advances in adversarial attack techniques have demonstrated success in attacking high-quality CNN-based object detectors. However, existing adversarial attack algorithms for object detection mainly focus on disturbing the optimization objectives (i.e., the classification and regression losses), which is sub-optimal because it ignores contextual information. We propose the contextual adversarial perturbation (CAP), which attacks contextual information and is more effective at degrading the mAP and recall of object detectors. Notably, our CAP does not rely on ground-truth information to generate adversarial examples and thus generalizes better. We further design a contextual background loss that degrades mAP and recall to almost 0.00%. Extensive experiments on the PASCAL VOC and MS COCO datasets demonstrate the effectiveness of our attacks on both fully and weakly supervised object detectors.
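The paper's implementation is not shown on this page. Below is a minimal, hypothetical PyTorch sketch of how a ground-truth-free, iterative loss-ascent attack of this kind might look. The cap_attack function name, the context_loss callable (a stand-in for the contextual background loss described in the abstract), and the perturbation budget and step-size defaults are all illustrative assumptions, not the authors' code.

import torch

def cap_attack(model, image, context_loss, eps=8/255, alpha=2/255, steps=10):
    # Iteratively perturb `image` within an L-inf ball of radius `eps`
    # to maximize a contextual loss computed from the detector's own
    # outputs, so no ground-truth labels are needed.
    adv = image.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        # Assumed interface: `context_loss` maps detector outputs to a scalar.
        loss = context_loss(model(adv))
        grad, = torch.autograd.grad(loss, adv)
        with torch.no_grad():
            adv = adv + alpha * grad.sign()               # ascend the loss
            adv = image + (adv - image).clamp(-eps, eps)  # project onto eps-ball
            adv = adv.clamp(0.0, 1.0)                     # keep valid pixel range
    return adv

Because context_loss is evaluated purely on the model's predictions for the perturbed image, the sketch mirrors the abstract's claim that adversarial examples can be generated without ground-truth annotations.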
Keywords
Adversarial attack, contextual information, object detection, weakly supervised object detection