Hijacking Tracker: A Powerful Adversarial Attack On Visual Tracking

2020 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2020)

Abstract
Visual object tracking has made important breakthroughs with the assistance of deep learning models. Unfortunately, recent research has clearly shown that deep learning models are vulnerable to malicious adversarial attacks, which mislead the models into making wrong decisions by perturbing the input image. This threat highlights the need to examine the security of deep learning-based tracking algorithms. Therefore, we study adversarial attacks against advanced deep learning-based trackers to better identify the vulnerabilities of tracking algorithms. In this paper, we propose to add slight adversarial perturbations to the input image through an inconspicuous but powerful attack strategy: the hijacking algorithm. Specifically, the hijacking strategy misleads trackers in two ways: shape hijacking changes the shape of the model output, while position hijacking gradually pushes the output to an arbitrary position in the image frame. We further propose an adaptive optimization approach to integrate the two hijacking mechanisms efficiently. As a result, the hijacking algorithm gradually fools the tracker into tracking the wrong target. Experimental results demonstrate the strong attack ability of our method, which quickly hijacks state-of-the-art trackers and reduces their accuracy by more than 90% on OTB2015.
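The abstract describes the attack only at a high level. As a rough illustration of how a shape term and a position term could be combined into a single iterative perturbation update, here is a minimal PyTorch sketch; the `tracker.response_map` interface, both loss forms, the adaptive weight `w`, and the budget parameters `eps`/`alpha` are all assumptions for illustration, not the paper's published implementation.

```python
import torch

def hijack_perturbation(tracker, frame, target_pos,
                        steps=10, eps=8 / 255, alpha=2 / 255):
    """Craft a small perturbation combining the two hijacking objectives.

    `tracker.response_map`, `target_pos`, and the loss forms below are
    hypothetical stand-ins; the paper does not publish this interface.
    """
    delta = torch.zeros_like(frame, requires_grad=True)
    for _ in range(steps):
        # Assumed API: the tracker exposes a differentiable response map.
        response = tracker.response_map(frame + delta)
        # Position hijacking: raise the response at the attacker-chosen spot.
        pos_loss = -response[..., target_pos[1], target_pos[0]].mean()
        # Shape hijacking: suppress the peak over the true target.
        shape_loss = response.max()
        # Placeholder adaptive weighting: balance the two terms by magnitude.
        w = (pos_loss.abs() / (shape_loss.abs() + 1e-8)).detach()
        loss = pos_loss + w * shape_loss
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()  # signed gradient descent step
            delta.clamp_(-eps, eps)             # keep the perturbation slight
        delta.grad.zero_()
    return delta.detach()
```

Repeating this update over successive frames would, in the spirit of the abstract, drag the tracker's output toward the attacker-chosen position while keeping the perturbation within a small budget.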
Keywords
Hijacking, visual tracking, adversarial attack