Fine-Grained Motion Representation For Template-Free Visual Tracking

2020 IEEE Winter Conference on Applications of Computer Vision (WACV)

Abstract
The object tracking task requires tracking an arbitrary target across consecutive video frames. Recently, several attempts have been made to develop template-free models to attain generality. However, the current template-free paradigm estimates only the displacement to approximate the motion of the object. Displacement alone is insufficient to represent complex bounding-box transformations, including scaling and rotation. We argue that this coarse-grained representation of object motion limits the performance of current template-free approaches. In this paper, we explore finer-grained motion estimation to improve the accuracy of the template-free model. With respect to the image space, our method estimates a transformation for each pixel. With respect to the motion representation, we represent the motion by a transformation parameterized by displacement, scaling, and rotation. By applying differential vector operators to the optical flow, our approach estimates displacement, scaling, and rotation for each pixel in a unified theory. To the best of our knowledge, ours is the first work to model displacement, scaling, and rotation in a unified theory with optical flow. To further improve localization accuracy, we develop an appearance branch that introduces appearance information into our model. Furthermore, to suppress optical-flow estimation failures during training, we propose a novel loss function, Limited L1. Experiments show that our model, FGTrack, achieves state-of-the-art performance on both the NFS and VOT2017 datasets.
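The abstract's key idea is that differential vector operators applied to an optical-flow field expose scaling and rotation cues in addition to translation: the divergence of the flow measures local expansion or contraction, and its curl measures in-plane rotation. The sketch below (not the authors' code; array shapes and the use of simple finite differences are assumptions for illustration) shows one minimal way to compute these per-pixel quantities.

```python
# Minimal sketch: per-pixel displacement, scaling, and rotation cues
# from an optical-flow field via finite-difference vector operators.
import numpy as np

def flow_to_motion_params(flow):
    """flow: (H, W, 2) array holding per-pixel displacement (u, v).

    Returns per-pixel displacement magnitude, a scaling cue from the
    divergence of the flow, and a rotation cue from its curl.
    """
    u, v = flow[..., 0], flow[..., 1]

    # np.gradient returns derivatives along axis 0 (y) then axis 1 (x).
    du_dy, du_dx = np.gradient(u)
    dv_dy, dv_dx = np.gradient(v)

    displacement = np.sqrt(u ** 2 + v ** 2)  # translation magnitude
    divergence = du_dx + dv_dy               # >0 expansion, <0 contraction
    curl = dv_dx - du_dy                     # in-plane rotation cue

    return displacement, divergence, curl


# Usage on a synthetic affine flow that expands (a) and rotates (w)
# about the image center.
H, W = 64, 64
ys, xs = np.mgrid[0:H, 0:W].astype(np.float32)
cx, cy = W / 2, H / 2
a, w = 0.05, 0.1
flow = np.stack([a * (xs - cx) - w * (ys - cy),
                 w * (xs - cx) + a * (ys - cy)], axis=-1)
disp, div, rot = flow_to_motion_params(flow)
print(div.mean(), rot.mean())  # expected: 0.1 (= 2*a) and 0.2 (= 2*w)
```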
Keywords
motion representation,template-free visual tracking,object tracking task,video frames,coarse-grained representation,object motion,motion estimation,optical flow estimation,differential vector operators