Video-Based Face Association and Identification

2017 12th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2017), 2017

Abstract
In this paper, we present a new video-based face identification algorithm in which the target (i.e., the person of interest) in the probe video is annotated only once with a face bounding box in a single frame, and the video may consist of multiple shots. Most video face identification techniques assume that the video consists of a single shot, so the bounding boxes of the target face can be extracted by tracking the face across the video frames. However, such automatic annotation is vulnerable to drifting of the face tracker, and face tracking algorithms are inadequate for associating the target's face images across multiple shots. In this paper, we propose a target face association (TFA) technique that retrieves a set of representative face images in a given video that are likely to have the same identity as the target face. These face images are then used to construct a robust representation of the target face for searching for the corresponding subject in the gallery. Since two faces that appear in the same video frame cannot belong to the same person, such cannot-link constraints are exploited to learn a target-specific linear classifier that establishes the intra- and inter-shot face association of the target. Experimental results on the newly released JANUS challenge set 3 (JANUS CS3) dataset show that our method generates robust representations from target-annotated videos and demonstrates good performance on the video-based face identification task.
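The TFA step described in the abstract can be illustrated with a minimal sketch: faces that co-occur in the same frame as the annotated target serve as cannot-link negatives for a target-specific linear classifier, which then scores all detected faces across shots to retrieve the associated set. The helper names (`target_face_association`, `features`, `frame_of`, `target_idx`), the choice of a linear SVM, and the mean-pooled representation are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of target face association (TFA) with cannot-link constraints.
# Assumed inputs: features[i] is an L2-normalized deep feature of detected face i,
# frame_of[i] is its frame index, and target_idx is the single annotated target face.
import numpy as np
from sklearn.svm import LinearSVC

def target_face_association(features, frame_of, target_idx, top_k=20):
    features = np.asarray(features)
    # Cannot-link constraint: faces sharing a frame with the target cannot be
    # the target, so they serve as reliable negatives.
    negatives = [i for i, f in enumerate(frame_of)
                 if f == frame_of[target_idx] and i != target_idx]
    if not negatives:
        return [target_idx]

    X = np.vstack([features[target_idx], features[negatives]])
    y = np.array([1] + [0] * len(negatives))

    # Target-specific linear classifier (a stand-in for the paper's learner).
    clf = LinearSVC(C=1.0).fit(X, y)

    # Score every detected face across all shots and keep the top responses
    # as the associated set for the target.
    scores = clf.decision_function(features)
    ranked = np.argsort(-scores)
    return [int(i) for i in ranked[:top_k]]

def target_representation(features, associated):
    # Robust representation: mean of the associated faces' features, renormalized.
    rep = np.mean(np.asarray(features)[associated], axis=0)
    return rep / np.linalg.norm(rep)
```

The resulting representation would then be matched against gallery templates (e.g., by cosine similarity) to identify the target.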
Keywords
video-based face association,face bounding box,video frames,face tracking algorithm,face images,target face association,TFA,robust face representation,target-specific linear classifier,intra-inter-shot face association,JANUS challenge set-3,JANUS CS3,target-annotated videos,video-based face identification problem