
Visual Tracking: An Experimental Survey

IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 36, no. 7, pp. 1442–1468, 2014

Cited by 1575 | Views 44
Indexed in WOS, EI

Abstract

There is a large variety of trackers, which have been proposed in the literature during the last two decades with some mixed success. Object tracking in realistic scenarios is a difficult problem; therefore, it remains a most active area of research in computer vision. A good tracker should perform well in a large number of videos involving …

Introduction
  • Visual tracking is a hard problem, as many different and varying circumstances need to be reconciled in one algorithm.
  • Given the wide variety of aspects in tracking circumstances, and the wide variety of tracking methods, it is surprising that the number of evaluation video sequences is generally limited.
  • In the papers on tracking appearing in TPAMI or in CVPR 2011, the number of different videos is only five to ten.
  • The length of the videos may be long, one to fifteen minutes, but with only five to ten different videos few of the above conditions will be adequately tested
Highlights
  • Visual tracking is a hard problem, as many different and varying circumstances need to be reconciled in one algorithm
  • As a large variety of circumstances is covered by the video sequences and a wide variety of trackers is included in the pool, we propose to perform an objective evaluation of their performance
  • Changes in the appearance due to redressing of the target by rotation or otherwise, the presence of specularities and changes in illumination are a group of circumstances which are closely linked
  • In this paper we have argued that the use of a wide variety of videos is important to obtain a good, differentiated impression of the performance of trackers in the many different circumstances
  • We conclude that the F-score and the object tracking accuracy (OTA) score are highly correlated, while F and the ATA-score or the F1-score are still correlated strongly enough that no additional information is to be expected from using both
  • For the four videos with relative motion and with light occlusion, the average F-scores of the top five trackers are 0.88, 1.00, 0.76 and 0.84, respectively. This indicates that occlusion of less than 30% of the target may be considered a solved problem
  • We have considered single object tracking, where the object is represented by a given bounding box
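Since every tracker here is evaluated against a given ground-truth bounding box, the per-frame criterion underlying the metrics discussed in this survey is box overlap. A minimal sketch, assuming the standard PASCAL-style intersection-over-union measure and a 0.5 threshold; the function names are illustrative, not the paper's code:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned (x, y, w, h) boxes."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    iw = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))  # intersection width
    ih = max(0.0, min(ay + ah, by + bh) - max(ay, by))  # intersection height
    inter = iw * ih
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

def frame_correct(pred, gt, threshold=0.5):
    """A frame counts as correctly tracked when the overlap reaches the threshold."""
    return iou(pred, gt) >= threshold
```

A fully correct prediction gives an overlap of 1, disjoint boxes give 0, and partial overlaps fall in between.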
Methods
  • The Methods of Tracking

    In this paper, trackers are divided according to their main method of tracking.
  • It is remarkable that the best performing trackers in this survey originate from all five groups
  • This demonstrates that they all solve some part of the problem of tracking.
  • In this survey it has become evident, from the large distance between the best trackers and the ideal combination of trackers in Fig. 7, that many of the proposed methods have value for some of the circumstances of tracking.
  • Their ideal combination would solve a much larger part, as is demonstrated in the same figure showing the margin between the trackers and the best possible combination
Results
  • Before getting to the actual tracker performances, the authors evaluate the effectiveness of evaluation metrics.
  • Fig. 6 shows a plot of the metrics derived from all sequences and all trackers.
  • The correlation between the F-score and OTA is 0.99.
  • The correlation between F and ATA is 0.95.
  • The correlation between F and F1 is 0.93.
  • It is concluded that F, ATA and F1 essentially measure the same performance.
  • The authors prefer to use the F-score, as it draws a clear distinction between success and failure, which makes it easier to evaluate a large dataset
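As a rough illustration of how a sequence-level F-score can be assembled from per-frame overlap decisions, here is a hedged sketch. The paper's exact bookkeeping may differ; in particular, counting a missing tracker output (`None`) as a false negative, and a wrong output as both a false positive and a false negative, is an assumption made for this example:

```python
def overlap(box_a, box_b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    iw = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    ih = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    inter = iw * ih
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

def sequence_f_score(preds, gts, threshold=0.5):
    """F-score over a sequence; preds may contain None when the tracker reports nothing."""
    tp = fp = fn = 0
    for pred, gt in zip(preds, gts):
        if pred is None:
            fn += 1                       # target present, no output
        elif overlap(pred, gt) >= threshold:
            tp += 1                       # correctly tracked frame
        else:
            fp += 1                       # wrong output ...
            fn += 1                       # ... and the target was missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

With every frame tracked correctly the score is 1.0; each lost or wrongly placed frame pulls precision, recall, or both down.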
Conclusion
  • 7.1 The Circumstances of Tracking

    The analysis and the experiments highlighted many circumstances which affect the performance of a tracker.
  • Of the nineteen trackers the authors have considered in this survey, eight have a mechanism for handling apparent changes in the scale of the object, whether it is due to a zoom in the camera or a change of the target’s distance to the camera.
  • In the experiments the authors were able to demonstrate that inclusion of such a mechanism is advantageous for all these trackers, regardless of their overall performance level.
  • Although size is rarely a design consideration in tracking, the authors were able to demonstrate a dependence on the size of the target in many trackers.
  • The length of the video is an important factor in distinguishing trackers, as it tests both their general ability to track and their model update mechanisms
Summary
  • Introduction:

    Visual tracking is a hard problem, as many different and varying circumstances need to be reconciled in one algorithm.
  • Given the wide variety of aspects in tracking circumstances, and the wide variety of tracking methods, it is surprising that the number of evaluation video sequences is generally limited.
  • In the papers on tracking appearing in TPAMI or in CVPR 2011, the number of different videos is only five to ten.
  • The length of the videos may be long, one to fifteen minutes, but with only five to ten different videos few of the above conditions will be adequately tested
  • Objectives:

    The authors aim to evaluate trackers systematically and experimentally on 315 video fragments covering the above aspects.
  • The authors aim to group methods of tracking on the basis of their experimental performance.
  • The authors aim to evaluate the expressivity and inter-dependence of tracking performance measures.
  • In this survey the authors aimed to include trackers of as diverse an origin as possible to cover the current paradigms
  • Methods:

    The Methods of Tracking

    In this paper, trackers are divided according to their main method of tracking.
  • It is remarkable that the best performing trackers in this survey originate from all five groups
  • This demonstrates that they all solve some part of the problem of tracking.
  • In this survey it has become evident, from the large distance between the best trackers and the ideal combination of trackers in Fig. 7, that many of the proposed methods have value for some of the circumstances of tracking.
  • Their ideal combination would solve a much larger part, as is demonstrated in the same figure showing the margin between the trackers and the best possible combination
  • Results:

    Before getting to the actual tracker performances, the authors evaluate the effectiveness of evaluation metrics.
  • Fig. 6 shows a plot of the metrics derived from all sequences and all trackers.
  • The correlation between the F-score and OTA is 0.99.
  • The correlation between F and ATA is 0.95.
  • The correlation between F and F1 is 0.93.
  • It is concluded that F, ATA and F1 essentially measure the same performance.
  • The authors prefer to use the F-score, as it draws a clear distinction between success and failure, which makes it easier to evaluate a large dataset
  • Conclusion:

    7.1 The Circumstances of Tracking

    The analysis and the experiments highlighted many circumstances which affect the performance of a tracker.
  • Of the nineteen trackers the authors have considered in this survey, eight have a mechanism for handling apparent changes in the scale of the object, whether it is due to a zoom in the camera or a change of the target’s distance to the camera.
  • In the experiments the authors were able to demonstrate that inclusion of such a mechanism is advantageous for all these trackers, regardless of their overall performance level.
  • Although size is rarely a design consideration in tracking, the authors were able to demonstrate a dependence on the size of the target in many trackers.
  • The length of the video is an important factor in distinguishing trackers, as it tests both their general ability to track and their model update mechanisms
Tables
  • Table1: Overview Characteristics of the Evaluation Metrics
  • Table2: Overview Characteristics of the Trackers Used in This Paper. The sequences are frequently used in recent tracking papers, covering the aspects of light, albedo, transparency, motion smoothness, confusion, occlusion and shaking camera. 65 sequences have been reported earlier in the PETS workshop [32], and 250 are new, for a total of 315 video sequences. The main source of the data is real-life videos from YouTube with 64 different types of targets, ranging from a human face, a person, a ball, an octopus, microscopic cells and a plastic bag to a can. The collection is categorized for thirteen aspects of difficulty with many hard to very hard videos, like a dancer, a rock singer in a concert, a completely transparent glass, an octopus, a flock of birds, a soldier in camouflage, a completely occluded object and videos with extreme zooming introducing abrupt motion of targets
  • Table3: List of Outstanding Cases Resulting from the Grubbs’ Outlier Test and with F ≥ 0.5. The outstanding case for MST is attributed to the increased likelihood of getting stuck in a local minimum when the target is small. For IVT and LOT the number of free parameters of the appearance model is among the largest of all trackers. Therefore, they are likely profiting from having a larger target at their disposal to learn an image model. In SPT and LOT, we find some evidence that superpixel representations are less suited for small widths. In contrast, none of the discriminative trackers is found to be sensitive to target size, demonstrating the capacity to normalize the size by going after the difference between the target and its background
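The outstanding cases in Table 3 come from a Grubbs' outlier test on the scores. A sketch of the standard two-sided, single-outlier form of that test (an assumption: the paper's exact variant and significance level are not given here):

```python
import math
from scipy import stats  # t-distribution quantiles for the critical value

def grubbs_outlier(values, alpha=0.05):
    """Index of a single two-sided Grubbs outlier, or None if the test fails."""
    n = len(values)
    mean = sum(values) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
    # candidate: the observation farthest from the mean
    idx = max(range(n), key=lambda i: abs(values[i] - mean))
    g = abs(values[idx] - mean) / sd  # Grubbs statistic
    # critical value from the t-distribution, two-sided test
    t = stats.t.ppf(1 - alpha / (2 * n), n - 2)
    g_crit = (n - 1) / math.sqrt(n) * math.sqrt(t * t / (n - 2 + t * t))
    return idx if g > g_crit else None
```

For a pool of otherwise similar F-scores, one tracker scoring far above (or below) the rest would be flagged; a smoothly spread set of scores would not.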
Related Work
  • Tracking is one of the most challenging computer vision problems, concerning the task of generating an inference about the motion of an object given a sequence of images. In this paper we confine ourselves to a simpler definition, which is easier to evaluate objectively: tracking is the analysis of video sequences for the purpose of establishing the location of the target over a sequence of frames (time) starting from the bounding box given in the first frame.

    2.1 Tracking Survey Papers

    Many trackers have been proposed in the literature, usually in conjunction with their intended application areas. A straightforward application of target tracking is surveillance and security control, initially provided by radar and position sensor systems [15] and later by video surveillance systems. These systems are built on some typical models, namely object segmentation (often by background difference), appearance and motion model definition, prediction and probabilistic inference. For instance, [16] provides an experimental evaluation of some tracking algorithms on the AVSS (conference on advanced video and signal based surveillance) dataset for surveillance of multiple people. The focus of such reviews is sometimes narrower still, as in [17], which discusses tracking specific targets only, such as sport players. The survey of [18] is on tracking lanes for driver assistance. Other surveys address robot applications where tracking based on a Kalman filter is well suited [19]. Yet others focus on a single type of target, such as humans [20], [21]. Other tracking methods are designed for moving sensors as used in navigation [22]. Recently, a survey was presented for wireless sensor networks, focusing on the capability of methods to give a simple estimate of the position of the object [23].
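The definition adopted above implies a simple evaluation protocol: initialize on the bounding box given in the first frame, then report one box per subsequent frame. A toy sketch under that assumption (the class and function names are illustrative, not an interface from the paper):

```python
class NaiveStaticTracker:
    """Toy baseline: keeps reporting the initial box, a lower bound for real trackers."""

    def init(self, first_frame, first_box):
        self.box = first_box

    def update(self, frame):
        return self.box  # a real tracker would re-locate the target here

def run_tracker(tracker, frames, first_box):
    """Initialize on frame 0, then collect one predicted box per remaining frame."""
    tracker.init(frames[0], first_box)
    return [tracker.update(f) for f in frames[1:]]
```

The resulting list of boxes is what the per-frame overlap metrics are computed against.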
Funding
  • The work in this paper was funded by COMMIT, the National Dutch Program for public-private ICT research in the Netherlands, by EU FESR 2008 15 from the region of Emilia Romagna, Italy, and by the U.S. Army Research Laboratory and the U.S. Army Research Office under grant W911NF-09-1-0255
References
  • [1] Z. Kalal, J. Matas, and K. Mikolajczyk, “P-N learning: Bootstrapping binary classifiers by structural constraints,” in Proc. IEEE CVPR, San Francisco, CA, USA, 2010.
  • [2] W. Hu, X. Zhou, W. Li, W. Luo, X. Zhang, and S. Maybank, “Active contour-based visual tracking by integrating colors, shapes, and motions,” IEEE Trans. Image Process., vol. 22, no. 5, pp. 1778–1792, May 2013.
  • [3] X. Gao, Y. Su, X. Li, and D. Tao, “A review of active appearance models,” IEEE Trans. Syst., Man, Cybern. C, vol. 40, no. 2, pp. 145–158, 2010.
  • [4] U. Prabhu, K. Seshadri, and M. Savvides, “Automatic facial landmark tracking in video sequences using Kalman filter assisted active shape models,” in Proc. ECCV, Heraklion, Greece, 2010.
  • [5] J. Berclaz, F. Fleuret, E. Turetken, and P. Fua, “Multiple object tracking using k-shortest paths optimization,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 33, no. 9, pp. 1806–1819, Sept. 2011.
  • [6] J. Henriques, R. Caseiro, and J. Batista, “Globally optimal solution to multi-object tracking with merged measurements,” in Proc. ICCV, Barcelona, Spain, 2011.
  • [7] A. R. Zamir, A. Dehghan, and M. Shah, “GMCP-tracker: Global multi-object tracking using generalized minimum clique graphs,” in Proc. 12th ECCV, Florence, Italy, 2012.
  • [8] S. Pellegrini, A. Ess, and L. van Gool, “Improving data association by joint modeling of pedestrian trajectories and groupings,” in Proc. 11th ECCV, Heraklion, Greece, 2010.
  • [9] L. Zhang, Y. Li, and R. Nevatia, “Global data association for multi-object tracking using network flows,” in Proc. IEEE CVPR, Anchorage, AK, USA, 2008.
  • [10] A. Yilmaz, O. Javed, and M. Shah, “Object tracking: A survey,” ACM CSUR, vol. 38, no. 4, Article 13, 2006.
  • [11] T. F. Chan and L. A. Vese, “Active contours without edges,” IEEE Trans. Image Process., vol. 10, no. 2, pp. 266–277, Feb. 2001.
  • [12] Y. Li, H. Ai, T. Yamashita, S. Lao, and M. Kawade, “Tracking in low frame rate video: A cascade particle filter with discriminative observers of different life spans,” in Proc. IEEE CVPR, Minneapolis, MN, USA, 2007.
  • [13] J. Kwon and K. Lee, “Tracking of abrupt motion using Wang-Landau Monte Carlo estimation,” in Proc. 10th ECCV, Marseille, France, 2008.
  • [14] W. C. Siew, K. P. Seng, and L. M. Ang, “Lips contour detection and tracking using watershed region-based active contour model and modified,” IEEE Trans. Circuits Syst. Video Technol., vol. 22, no. 6, pp. 869–874, Jun. 2012.
  • [15] B. Ristic and M. L. Hernandez, “Tracking systems,” in Proc. IEEE RADAR, Rome, Italy, 2008, pp. 1–2.
  • [16] J. Fiscus, J. Garofolo, T. Rose, and M. Michel, “AVSS multiple camera person tracking challenge evaluation overview,” in Proc. 6th IEEE AVSS, Genova, Italy, 2009.
  • [17] C. B. Santiago, A. Sousa, M. L. Estriga, L. P. Reis, and M. Lames, “Survey on team tracking techniques applied to sports,” in Proc. AIS, Povoa de Varzim, Portugal, 2010, pp. 1–6.
  • [18] J. C. McCall and M. M. Trivedi, “Video-based lane estimation and tracking for driver assistance: Survey, system, and evaluation,” IEEE Trans. Intell. Transp. Syst., vol. 7, no. 1, pp. 20–37, Mar. 2006.
  • [19] S. Y. Chen, “Kalman filter for robot vision: A survey,” IEEE Trans. Ind. Electron., vol. 59, no. 11, pp. 4409–4420, Nov. 2012.
  • [20] T. B. Moeslund, A. Hilton, and V. Krüger, “A survey of advances in vision-based human motion capture and analysis,” CVIU, vol. 104, no. 2–3, pp. 90–126, 2006.
  • [21] R. Poppe, “Vision-based human motion analysis: An overview,” CVIU, vol. 108, no. 1–2, pp. 4–18, 2007.
  • [22] Z. Jia, A. Balasuriya, and S. Challa, “Recent developments in vision based target tracking for autonomous vehicles navigation,” in Proc. IEEE ITSC, Toronto, ON, Canada, 2006, pp. 765–770.
  • [23] O. Demigha, W. Hidouci, and T. Ahmed, “On energy efficiency in collaborative target tracking in wireless sensor network: A review,” IEEE Commun. Surv. Tuts., vol. 15, no. 99, pp. 1–13, 2012.
  • [24] J. Popoola and A. Amer, “Performance evaluation for tracking algorithms using object labels,” in Proc. IEEE ICASSP, Las Vegas, NV, USA, 2008.
  • [25] D. A. Klein, D. Schulz, S. Frintrop, and A. B. Cremers, “Adaptive real-time video-tracking for arbitrary objects,” in Proc. IEEE IROS, Taipei, Taiwan, 2010, pp. 772–777.
  • [26] [Online]. Available: /home/skynet/a/sig/kng/dataset/CAVIAR
  • [27] A. Nilski, “An evaluation metric for multiple camera tracking systems: The i-LIDS 5th scenario,” in Proc. SPIE, Cardiff, Wales, 2008.
  • [28] D. Baltieri, R. Vezzani, and R. Cucchiara, “3DPes: 3D people dataset for surveillance and forensics,” in Proc. Int. ACM Workshop MA3HO, Scottsdale, AZ, USA, 2011, pp. 59–64.
    Google ScholarLocate open access versionFindings
  • [29] J. Ferryman and J. L. Crowley, Proc. IEEE Int. Workshop PETS, Boston, MA, USA, Aug. 2010.
    Google ScholarLocate open access versionFindings
  • [30] C.-H. Kuo, C. Huang, and R. Nevatia, “Multi-target tracking by on-line learned discriminative appearance models,” in Proc. IEEE CVPR, San Francisco, CA, USA, 2010, pp. 685–692.
    Google ScholarLocate open access versionFindings
  • [31] B. Karasulu and S. Korukoglu, “A software for performance evaluation and comparison of people detection and tracking methods in video processing,” MTA, vol. 55, no. 3, pp. 677–723, 2011.
    Google ScholarLocate open access versionFindings
  • [32] D. M. Chu and A. W. M. Smeulders, “Thirteen hard cases in visual tracking,” in Proc. IEEE Int. Workshop PETS, 2010.
    Google ScholarLocate open access versionFindings
  • [33] C. Erdem, B. Sankur, and A. M. Tekalp, “Performance measures for video object segmentation and tracking,” IEEE Trans. Image Process., vol. 13, no. 7, pp. 937–951, Jul. 2004.
    Google ScholarLocate open access versionFindings
  • [34] S. Salti, A. Cavallaro, and L. di Stefano, “Adaptive appearance modeling for video tracking: Survey and evaluation,” IEEE Trans. Image Process., vol. 21, no. 10, pp. 4334–4348, Oct. 2012.
    Google ScholarLocate open access versionFindings
  • [35] J. C. SanMiguel, A. Cavallaro, and J. M. Martinez, “Adaptive on-line performance evaluation of video trackers,” IEEE Trans. Image Process., vol. 21, no. 5, pp. 1828–1837, May 2012.
    Google ScholarLocate open access versionFindings
  • [36] A. T. Nghiem, F. Bremond, M. Thonnat, and V. Valentin, “Etiseo, performance evaluation for video surveillance systems,” in Proc. AVSS, London, U.K., 2007, pp. 476–481.
    Google ScholarLocate open access versionFindings
  • [37] P. Carvalho, J. S. Cardoso, and L. Corte-Real, “Filling the gap in quality assessment of video object tracking,” IVC, vol. 30, no. 9, pp. 630–640, 2012.
    Google ScholarLocate open access versionFindings
  • [38] F. Bashir and F. Porikli, “Performance evaluation of object detection and tracking systems,” in Proc. IEEE Int. Workshop PETS, 2006.
    Google ScholarLocate open access versionFindings
  • [39] R. Kasturi et al., “Framework for performance evaluation of face, text, and vehicle detection and tracking in video: Data, metrics, and protocol,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 31, no. 2, pp. 319–336, Feb. 2009.
    Google ScholarLocate open access versionFindings
  • [40] K. Bernardin and R. Stiefelhagen, “Evaluating multiple object tracking performance: The clear MOT metrics,” EURASIP J. IVP, vol. 2008, no. 1, p. 246309, Feb. 2008.
    Google ScholarLocate open access versionFindings
  • [41] M. Everingham, L. J. V. Gool, C. K. I. Williams, J. M. Winn, and A. Zisserman, “The Pascal visual object classes VOC challenge,” IJCV, vol. 88, no. 2, pp. 303–338, 2010.
    Google ScholarLocate open access versionFindings
  • [42] D. L. Shaul Oron, Aharon Bar-Hillel, and S. Avidan, “Locally orderless tracking,” in Proc. IEEE CVPR, Providence, RI, USA, 2012.
    Google ScholarLocate open access versionFindings
  • [43] J. Kwon and K. M. Lee, “Tracking of a non-rigid object via patch-based dynamic appearance modeling and adaptive basin hopping monte carlo sampling,” in Proc. IEEE CVPR, Miami, FL, USA, 2009.
    Google ScholarLocate open access versionFindings
  • [44] E. Maggio and A. Cavallaro, Video Tracking: Theory and Practice. 1st ed. Oxford, U.K.: Wiley, 2011.
    Google ScholarFindings
  • [45] E. Maggio and A. Cavallaro, “Tracking by sampling trackers,” in Proc. IEEE ICCV, Barcelona, Spain, 2011, pp. 1195–1202.
    Google ScholarLocate open access versionFindings
  • [46] B. Babenko, M.-H. Yang, and S. Belongie, “Visual tracking with online multiple instance learning,” in Proc. IEEE CVPR, Miami, FL, USA, 2009.
    Google ScholarLocate open access versionFindings
  • [47] A. Sanin, C. Sanderson, and B. C. Lovell, “Shadow detection: A survey and comparative evaluation of recent methods,” PR, vol. 45, no. 4, pp. 1684–1695, 2012.
    Google ScholarLocate open access versionFindings
  • [48] A. Amato, M. G. Mozerov, A. D. Bagdanov, and J. Gonzalez, “Accurate moving cast shadow suppression based on local color constancy detection,” IEEE Trans. Image Process., vol. 20, no. 10, pp. 2954–2966, Oct. 2011.
    Google ScholarLocate open access versionFindings
  • [49] K. Briechle and U. D. Hanebeck, “Template matching using fast normalized cross correlation,” in Proc. SPIE, vol. 4387. 2001, pp. 95–102.
    Google ScholarLocate open access versionFindings
  • [50] S. Baker and I. Matthews, “Lucas-Kanade 20 years on: A unifying framework,” IJCV, vol. 56, no. 3, pp. 221–255, 2004.
    Google ScholarLocate open access versionFindings
  • [51] H. T. Nguyen and A. W. M. Smeulders, “Fast occluded object tracking by a robust appearance filter,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 26, no. 8, pp. 1099–1104, Aug. 2004.
    Google ScholarLocate open access versionFindings
  • [52] A. Adam, E. Rivlin, and I. Shimshoni, “Robust fragments-based tracking using the integral histogram,” in Proc. IEEE CVPR, Washington, DC, USA, 2006.
    Google ScholarLocate open access versionFindings
  • [53] D. Comaniciu, V. Ramesh, and P. Meer, “Real-time tracking of non-rigid objects using mean shift,” in Proc. IEEE CVPR, Hilton Head Island, SC, USA, 2000.
    Google ScholarLocate open access versionFindings
  • [54] D. A. Ross, J. Lim, and R. S. Lin, “Incremental learning for robust visual tracking,” IJCV, vol. 77, no. 1–3, pp. 125–141, 2008.
    Google ScholarLocate open access versionFindings
  • [55] M. Isard and A. Blake, “A mixed-state condensation tracker with automatic model-switching,” in Proc. 6th ICCV, Bombay, India, 1998.
    Google ScholarLocate open access versionFindings
  • [56] J. Kwon and F. C. Park, “Visual tracking via geometric particle filtering on the affine group with optimal importance functions,” in Proc. IEEE CVPR, Miami, FL, USA, 2009.
    Google ScholarLocate open access versionFindings
  • [57] L. Cehovin, M. Kristan, and A. Leonardis, “An adaptive coupled-layer visual model for robust visual tracking,” in Proc. IEEE ICCV, Barcelona, Spain, 2011.
    Google ScholarLocate open access versionFindings
  • [58] X. Mei and H. Ling, “Robust visual tracking using L1 minimization,” in Proc. IEEE 12th ICCV, Kyoto, Japan, 2009.
    Google ScholarLocate open access versionFindings
  • [59] X. Mei, H. Ling, Y. Wu, E. Blasch, and L. Bai, “Minimum error bounded efficient l1 tracker with occlusion detection,” in Proc. IEEE CVPR, Providence, RI, USA, 2011.
    Google ScholarLocate open access versionFindings
  • [60] H. T. Nguyen and A. W. M. Smeulders, “Robust track using foreground-background texture discrimination,” IJCV, vol. 68, no. 3, pp. 277–294, 2006.
    Google ScholarLocate open access versionFindings
  • [61] D. M. Chu and A. W. M. Smeulders, “Color invariant surf in discriminative object tracking,” in Proc. IEEE ECCV, Heraklion, Greece, 2010.
    Google ScholarLocate open access versionFindings
  • [62] M. Godec, P. M. Roth, and H. Bischof, “Hough-based tracking of non-rigid objects,” in Proc. IEEE ICCV, Barcelona, Spain, 2011.
    Google ScholarLocate open access versionFindings
  • [63] L. Breiman, “Random forests,” ML, vol. 45, no. 1, pp. 5–32, 2001.
    Google ScholarLocate open access versionFindings
  • [64] D. H. Ballard, “Generalizing the hough transform to detect arbitrary shapes,” PR, vol. 13, no. 2, pp. 111–122, 1981.
    Google ScholarLocate open access versionFindings
  • [65] C. Rother, V. Kolmogorov, and A. Blake, “"GrabCut": Interactive foreground extraction using iterated graph cuts,” ACM Trans. Graphics, vol. 23, no. 3, pp. 309–314, Aug. 2004.
    Google ScholarLocate open access versionFindings
  • [66] S. Wang, H. Lu, F. Yang, and M.-H. Yang, “Superpixel tracking,” in Proc. IEEE ICCV, Barcelona, Spain, 2011.
    Google ScholarLocate open access versionFindings
  • [67] T. G. Dietterich, R. H. Lathrop, and T. Lozano-Pérez, “Solving the multiple instance problem with axis-parallel rectangles,” AI, vol. 89, no. 1–2, pp. 31–71, 1997.
    Google ScholarLocate open access versionFindings
  • [68] Z. Kalal, J. Matas, and K. Mikolajczyk, “Online learning of robust object detectors during unstable tracking,” in Proc. IEEE 12th ICCV, Kyoto, Japan, 2009.
    Google ScholarLocate open access versionFindings
  • [69] M. Ozuysal, P. Fua, and V. Lepetit, “Fast keypoint recognition in ten lines of code,” in Proc. IEEE CVPR, Minneapolis, MN, USA, 2007, pp. 1–8.
    Google ScholarLocate open access versionFindings
  • [70] S. Hare, A. Saffari, and P. H. S. Torr, “Struck: Structured output tracking with kernels,” in Proc. IEEE ICCV, Barcelona, Spain, 2011.
    Google ScholarLocate open access versionFindings
  • [71] J. R. R. Uijlings, A. W. M. Smeulders, and R. J. H. Scha, “What is the spatial extent of an object?” in Proc. IEEE CVPR, Miami, FL, USA, 2009.
    Google ScholarLocate open access versionFindings
  • [72] D. Terzopoulos and R. Szeliski, “Tracking with Kalman snakes,” MIT Press, 1992.
    Google ScholarFindings
  • [73] H. T. Nguyen, M. Worring, R. van den Boomgaard, and A. W. M. Smeulders, “Tracking nonparameterized object contours in video,” IEEE Trans. Image Process., vol. 11, no. 9, pp. 1081–1091, Sept. 2002.
    Google ScholarLocate open access versionFindings
  • [74] X. Zhou, W. Hu, Y. Chen, and W. Hu, “Markov random field modeled level sets method for object tracking with moving cameras,” in Asian Conference on Computer Vision, Y. Yagi, S. Kang, I. Kweon, and H. Zha, Eds. Berlin, Germany: Springer, 2007, pp. 832–842, LNCS 4843.
    Google ScholarFindings
  • [75] A. Senior, “Tracking people with appearance models,” in Proc. Int. Workshop PETS, 2002.
    Google ScholarLocate open access versionFindings
  • [76] R. Vezzani, C. Grana, and R. Cucchiara, “Probabilistic people tracking with appearance models and occlusion classification: The ad-hoc system,” PRL, vol. 32, no. 6, pp. 867–877, 2011.
    Google ScholarLocate open access versionFindings
  • [77] S. Calderara, R. Cucchiara, and A. Prati, “Bayesian-competitive consistent labeling for people surveillance,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 30, no. 2, pp. 354–360, Feb. 2008.
    Google ScholarLocate open access versionFindings
  • [78] G. Shu, A. Dehghan, O. Oreifej, E. Hand, and M. Shah, “Partbased multiple-person tracking with partial occlusion handling,” in Proc. IEEE CVPR, Providence, RI, USA, 2012.
    Google ScholarLocate open access versionFindings
  • [79] D. Ramanan, D. A. Forsyth, and K. Barnard, “Building models of animals from video,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 28, no. 8, pp. 1319–1334, Aug. 2006.
    Google ScholarLocate open access versionFindings
  • [80] J. M. Geusebroek, R. van den Boomgaard, A. W. M. Smeulders, and H. Geerts, “Color invariance,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 23, no. 12, pp. 1338–1350, Dec. 2001.
    Google ScholarLocate open access versionFindings
  • [81] J. M. Geusebroek, A. W. M. Smeulders, and J. J. van de Weijer, “Fast anisotropic gauss filtering,” IEEE Trans. Image Process., vol. 12, no. 8, pp. 938–943, Aug. 2003.
    Google ScholarLocate open access versionFindings
  • [82] H. Bay, A. Ess, T. Tuytelaars, and L. van Gool, “Speeded-up robust features (SURF),” CVIU, vol. 110, no. 3, pp. 346–359, 2008.
    Google ScholarLocate open access versionFindings
  • [83] C. R. Wren, A. Azarbayejani, T. Darrell, and A. P. Pentland, “Pfinder: Real-time tracking of the human body,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 19, no. 7, pp. 780–785, Jul. 1997.
    Google ScholarLocate open access versionFindings
  • [84] D. Koller et al., “Towards robust automatic traffic scene analysis in real-time,” in Proc. IEEE ICPR, Jerusalem, Israel, 1994.
    Google ScholarLocate open access versionFindings
  • [85] C. Stauffer and W. E. L. Grimson, “Adaptive background mixture models for real-time tracking,” in Proc. IEEE CVPR, Fort Collins, CO, USA, 1999.
    Google ScholarLocate open access versionFindings
  • [86] P. W. Power and J. A. Schoonees, “Understanding background mixture models for foreground segmentation,” in Proc. IVCNZ, 2002.
    Google ScholarLocate open access versionFindings
  • [87] R. Cucchiara, C. Grana, M. Piccardi, and A. Prati, “Detecting moving objects, ghosts, and shadows in video streams,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 25, no. 10, pp. 1337–1342, Oct. 2003.
    Google ScholarLocate open access versionFindings
  • [88] A. Humayun, O. Mac Aodha, and G. J. Brostow, “Learning to find occlusion regions,” in Proc. IEEE CVPR, Providence, RI, USA, 2011.
    Google ScholarLocate open access versionFindings
  • [89] T. B. Dinh, N. Vo, and G. Medioni, “Context tracker: Exploring supporters and distracters in unconstrained environments,” in Proc. IEEE CVPR, 2011.
    Google ScholarLocate open access versionFindings
  • [90] S. Lin, Y. Li, S. Kang, X. Tong, and H.-Y. Shum, “Diffuse-specular separation and depth recovery from image sequences,” in Proc. ECCV, London, U.K., 2002.
    Google ScholarLocate open access versionFindings
  • [91] Z. Qigui and L. Bo, “Search on automatic target tracking based on PTZ system,” in Proc. IEEE IASP, Hubei, China, 2011, pp. 192–195.
    Google ScholarLocate open access versionFindings
  • [92] R. E. Kalman, “A new approach to linear filtering and prediction problem,” J. Basic Eng., vol. 82, no. 1, pp. 34–45, 1960.
    Google ScholarLocate open access versionFindings
  • [93] G. Welch and G. Bishop, “An introduction to the Kalman filter,” Univ. North Carolina, Chapel Hill, NC, USA, Lecture, 2001.
    Google ScholarLocate open access versionFindings
  • [94] B. Ristic, S. Arulampalam, and N. Gordon, Beyond the Kalman Filter: Particle Filters for Tracking Applications. Artech House, 2003.
    Google ScholarFindings
  • [95] S. Ali and M. Shah, “Floor fields for tracking in high density crowd scenes,” in Proc. 10th ECCV, Marseille, France, 2008.
  • [96] M. Rodriguez, S. Ali, and T. Kanade, “Tracking in unstructured crowded scenes,” in Proc. IEEE 12th ICCV, Kyoto, Japan, 2009.
  • [97] X. Song, X. Shao, H. Zhao, J. Cui, R. Shibasaki, and H. Zha, “An online approach: Learning-semantic-scene-by-tracking and tracking-by-learning-semantic-scene,” in Proc. IEEE CVPR, San Francisco, CA, USA, 2010.
  • [98] D. Baltieri, R. Vezzani, and R. Cucchiara, “People orientation recognition by mixtures of wrapped distributions on random trees,” in Proc. 12th ECCV, Florence, Italy, 2012.
  • [99] “Multiple-shot person re-identification by chromatic and epitomic analyses,” PRL, vol. 33, no. 7, pp. 898–903, 2012.
  • [100] D. Coppi, S. Calderara, and R. Cucchiara, “Appearance tracking by transduction in surveillance scenarios,” in Proc. 8th IEEE AVSS, Klagenfurt, Austria, 2011.
  • [101] B. D. Lucas and T. Kanade, “An iterative image registration technique with an application to stereo vision,” in Proc. IJCAI, vol. 3, Vancouver, BC, Canada, 1981, pp. 674–679.
  • [102] F. Porikli, O. Tuzel, and P. Meer, “Covariance tracking using model update based on Lie algebra,” in Proc. IEEE CVPR, Washington, DC, USA, 2006.
  • [103] Y. Wu, J. Cheng, J. Wang, and H. Lu, “Real-time visual tracking via incremental covariance tensor learning,” in Proc. IEEE 12th ICCV, Kyoto, Japan, 2009.
  • [104] S. Avidan, “Support vector tracking,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 26, no. 8, pp. 1064–1072, Aug. 2004.
  • [105] E. Maggio, E. Piccardo, C. Regazzoni, and A. Cavallaro, “Particle PHD filtering for multi-target visual tracking,” in Proc. ICASSP, vol. 1, Honolulu, HI, USA, 2007, pp. 1101–1104.
  • [106] A. Ellis, A. Shahrokni, and J. Ferryman, “Overall evaluation of the PETS 2009 results,” in Proc. IEEE Int. Workshop PETS, vol. 7. 2009.
  • [107] B. Benfold and I. Reid, “Stable multi-target tracking in real-time surveillance video,” in Proc. IEEE CVPR, Providence, RI, USA, 2011.
  • [108] J. F. Lawless, Statistical Models and Methods for Lifetime Data. Hoboken, NJ, USA: Wiley, 2003.
  • [109] D. Collett, Modelling Survival Data in Medical Research. Boca Raton, FL, USA: Chapman & Hall, 2003.
  • [110] E. L. Kaplan and P. Meier, “Nonparametric estimation from incomplete observations,” J. Amer. Statist. Assoc., vol. 53, no. 282, pp. 457–481, 1958.
  • [111] N. Mantel, “Evaluation of survival data and two new rank order statistics arising in its consideration,” Cancer Chemother. Rep., vol. 50, no. 3, pp. 163–170, Mar. 1966.
  • [112] D. G. Kleinbaum and M. Klein, “Kaplan-Meier survival curves and the log-rank test,” in Survival Analysis. New York, NY, USA: Springer, 2012, pp. 55–96.
  • [113] F. E. Grubbs, “Procedures for detecting outlying observations in samples,” Technometrics, vol. 11, no. 1, pp. 1–21, 1969.
  • [114] K. E. A. van de Sande, J. R. R. Uijlings, T. Gevers, and A. W. M. Smeulders, “Segmentation as selective search for object recognition,” in Proc. ICCV, Barcelona, Spain, 2011.
  • Dung M. Chu received the master’s degree in computer science from the University of Amsterdam, Amsterdam, The Netherlands, in 2008. He is pursuing the Ph.D. degree with the Intelligent Systems Lab Amsterdam, University of Amsterdam. His current research interests include video understanding, object recognition, and object tracking.
  • Afshin Dehghan received the B.S. degree in electrical engineering from the University of Tehran, Tehran, Iran, in 2011. He is currently pursuing the Ph.D. degree at UCF’s Center for Research in Computer Vision (CRCV). He has authored several papers published in conferences such as CVPR and ECCV. His current research interests include object tracking, object detection, event recognition, and face verification.