A Robust SIFT-Based Descriptor for Video Classification
Proceedings of SPIE, the International Society for Optical Engineering (2015)
Abstract
The voluminous amount of video in today's world has made objective (or semi-objective) classification of videos a very popular subject. Among the various descriptors used for video classification, SIFT and LIFT can lead to highly accurate classifiers, but the SIFT descriptor does not consider video motion, and LIFT is time-consuming. In this paper, a robust descriptor for semi-supervised classification based on video content is proposed. It retains the benefits of the LIFT and SIFT descriptors while overcoming their shortcomings to some extent. To extract this descriptor, the SIFT descriptor is first applied, and the motion of the extracted keypoints is then used to improve the accuracy of the subsequent classification stage. Because the SIFT descriptor is scale invariant, the proposed method is also robust to zooming. In addition, using the global motion of keypoints in a video makes it possible to disregard local motions introduced by the cameraman during capture. Compared with other works that take video motion into account, the proposed descriptor requires fewer computations. Results obtained on the TRECVID 2006 dataset show that the proposed method improves accuracy over SIFT in content-based video classification by about 15 percent.
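The pipeline the abstract describes (extract SIFT keypoints, estimate the global motion of matched keypoints across frames, and fold that motion into the descriptor) can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the function names, the nearest-neighbour matching, and the use of the median displacement as a robust global-motion estimate are all assumptions made for the sketch.

```python
import numpy as np

def match_keypoints(desc_a, desc_b):
    # Nearest-neighbour matching of SIFT descriptors by Euclidean
    # distance; returns, for each row of desc_a, the index of the
    # closest row in desc_b. (Illustrative brute-force matcher.)
    dists = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    return np.argmin(dists, axis=1)

def motion_augmented_descriptor(desc_t, pts_t, desc_t1, pts_t1):
    """Append a global-motion estimate to each SIFT descriptor.

    Matches keypoints between frame t and frame t+1, takes the median
    per-keypoint displacement as the global motion (the median damps
    local object motion), and concatenates that 2-vector onto every
    128-d SIFT descriptor, yielding 130-d motion-aware descriptors.
    This is a hypothetical realization of the idea in the abstract.
    """
    idx = match_keypoints(desc_t, desc_t1)
    disp = pts_t1[idx] - pts_t                 # per-keypoint displacement
    global_motion = np.median(disp, axis=0)    # robust global estimate
    motion_cols = np.tile(global_motion, (len(desc_t), 1))
    return np.hstack([desc_t, motion_cols])
```

A descriptor built this way can be fed to any standard classifier (e.g. an SVM over bag-of-words histograms); the point of the sketch is only that adding a cheap global-motion term costs one matching pass and a median, far less than recomputing a spatio-temporal descriptor such as LIFT.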
Key words
Robust Video Descriptor, SIFT, Video Classification, LIFT