Matching a composite sketch to a photographed face using fused HOG and deep feature models

The Visual Computer (2020)

Abstract
In this paper, we focus on matching a computer-generated composite face sketch to a photograph, a task of great importance in criminal investigation. To bridge the two facial representation modalities, we propose a robust feature model that combines pixel-level features extracted from multi-scale key face patches with high-level features learned by a pre-trained deep model. First, texture features are captured by a two-level histogram of oriented gradients (HOG) descriptor that considers both the overall facial structure and local details. Semantic-level facial characteristics are then analyzed through the high-level features of the Visual Geometry Group-Face (VGG-Face) network. Next, the similarity between each sketch/photograph pair is measured by feature distance for each feature type. Adaptive weights are then assigned to each feature similarity according to its visual saliency contribution, and the similarities are fused at the score level. Finally, the fused similarity is used for matching. Experiments on the Pattern Recognition and Image Processing-Viewed Software-Generated Composite (PRIP-VSGC) database and the extended University of Malta Software-Generated Face Sketch (UoM-SGFS) database show that the proposed framework achieves better results than existing methods.
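To make the fusion pipeline concrete, the following is a minimal Python sketch of the matching flow described in the abstract (two-level HOG texture features, deep features from a pre-trained VGG-Face model, and score-level fusion). It is an illustration under stated assumptions, not the authors' implementation: `vgg_face_embed` is a hypothetical placeholder for a real VGG-Face forward pass, the patch grid and the fixed fusion weights stand in for the paper's multi-scale key patches and adaptive, saliency-driven weights, and cosine similarity is used as the feature-distance measure.

```python
# Minimal sketch of HOG + deep-feature fusion for sketch-to-photo matching.
# Assumptions: grayscale face images, a 4x4 patch grid, fixed fusion weights,
# and a placeholder deep-feature extractor instead of a real VGG-Face model.
import numpy as np
from skimage.feature import hog
from skimage.transform import resize


def two_level_hog(face, patch_grid=(4, 4)):
    """Pixel-level texture features: a global HOG plus HOG on local patches."""
    face = resize(face, (128, 128), anti_aliasing=True)
    feats = [hog(face, orientations=9, pixels_per_cell=(16, 16),
                 cells_per_block=(2, 2))]              # level 1: overall structure
    ph, pw = face.shape[0] // patch_grid[0], face.shape[1] // patch_grid[1]
    for i in range(patch_grid[0]):                     # level 2: local details
        for j in range(patch_grid[1]):
            patch = face[i * ph:(i + 1) * ph, j * pw:(j + 1) * pw]
            feats.append(hog(patch, orientations=9, pixels_per_cell=(8, 8),
                             cells_per_block=(2, 2)))
    return np.concatenate(feats)


def vgg_face_embed(face):
    """Hypothetical placeholder: high-level features from a pre-trained VGG-Face net."""
    raise NotImplementedError("plug in a real VGG-Face forward pass here")


def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))


def fused_score(sketch, photo, w_hog=0.4, w_deep=0.6):
    """Score-level fusion of HOG and deep-feature similarities.

    The fixed weights here are an assumption; the paper assigns adaptive
    weights based on each feature's visual saliency contribution.
    """
    s_hog = cosine_similarity(two_level_hog(sketch), two_level_hog(photo))
    s_deep = cosine_similarity(vgg_face_embed(sketch), vgg_face_embed(photo))
    return w_hog * s_hog + w_deep * s_deep


def match(sketch, gallery):
    """Rank gallery photographs by fused similarity to the composite sketch."""
    scores = [fused_score(sketch, photo) for photo in gallery]
    return int(np.argmax(scores)), scores
```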
Keywords
Composite sketch, HOG feature, VGG-face feature, Adaptive feature weight