Only Once Attack: Fooling the Tracker With Adversarial Template

IEEE Transactions on Circuits and Systems for Video Technology (2023)

Abstract
Adversarial attacks in visual object tracking aim to fool trackers by injecting invisible perturbations into video frames. Most adversarial methods generate perturbations for every frame, but such frequent attacks increase both the computational load and the risk of exposure. Unfortunately, few works attack only the initial frame, and their attack effects are insufficient. To tackle this, we focus on the initialization phase of tracking and propose an only once attack framework. It effectively fools the tracker by generating invisible perturbations only for the initial template, rather than for each frame. Specifically, considering the tracking mechanism of Siamese-based trackers, we design minimum-score-based and minimum-IoU-based loss functions. Both are used to train a UNet-based perturbation generator instead of the tracker, achieving a non-targeted attack. Additionally, we propose location and direction offsets as the basic attacks underlying a sophisticated targeted attack. By combining these two basic attacks, the tracker can easily be hijacked to move towards a fake target predefined by the user. Extensive experimental results demonstrate that our only once attack framework requires the fewest attacks yet achieves a better attack effect, with a maximum performance drop of 68.7%. Transferability experiments illustrate that our attack framework generalizes well and can be applied directly to CNN-based, Siamese-based, deep-discriminative-based, and Transformer-based trackers without retraining.
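The non-targeted objective summarized above (suppress the tracker's peak classification score and the overlap of its predicted box with the true target) can be sketched roughly as below. This is a minimal illustrative sketch, not the paper's actual formulation: the function names, the corner-format boxes, and the weighting term `lam` are assumptions.

```python
import numpy as np

def iou(box_a, box_b):
    # Intersection-over-union for boxes given as (x1, y1, x2, y2).
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def non_targeted_loss(score_map, pred_box, gt_box, lam=1.0):
    # Minimizing this loss pushes down both the peak response on the
    # true target and the predicted box's overlap with the ground truth,
    # so the tracker drifts away.  `lam` balances the two terms
    # (an assumed hyperparameter for this sketch).
    return float(score_map.max()) + lam * iou(pred_box, gt_box)
```

In the paper's setting such a loss would drive the training of the perturbation generator, with the tracker's weights frozen; here the loss is only evaluated, not backpropagated.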
Keywords
Visual object tracking, adversarial attacks, non-targeted attack, targeted attack