Cross-Domain Attention and Center Loss for Sketch Re-Identification

IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY (2022)

Abstract
Sketch re-identification (Sketch Re-id) is defined as matching the RGB photos of a target person in a gallery database against a full-body sketch drawn by a professional artist. The large gap between the sketch domain and the RGB domain makes Sketch Re-id challenging. This paper addresses the problem with a new framework, built on a CNN backbone, that learns domain-invariant features. To make the model focus on the regions of the RGB photo that correspond to the sketch, we propose a novel cross-domain attention (CDA) mechanism: its two branches split the feature maps in different ways and compute the relationships between parts of the sketch images and the RGB photos. We also design a cross-domain center loss (CDC), which removes the traditional center loss's restriction that all samples come from the same domain; it effectively reduces the gap between the two domains and pulls features of the same identity closer together. Experiments are performed on the Sketch Re-id dataset, in which each person has one sketch image and two RGB photos. To evaluate generalization, we also experiment on two popular sketch-photo face datasets. On the Sketch Re-id dataset the model outperforms previous methods by 3.7%, and on the CUHK student dataset it outperforms the state-of-the-art methods by 0.38%.
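
To make the cross-domain center loss idea concrete, the following is a minimal PyTorch-style sketch based only on the description in the abstract: a single learnable center per identity is shared by the sketch and RGB domains, so same-identity features from either domain are pulled toward a common point. The class name, argument names, and the plain squared-distance formulation are illustrative assumptions, not the authors' exact loss.

import torch
import torch.nn as nn

class CrossDomainCenterLoss(nn.Module):
    """Hypothetical sketch of a cross-domain center loss: one learnable
    center per identity, shared by the sketch and RGB domains."""

    def __init__(self, num_ids: int, feat_dim: int):
        super().__init__()
        # One shared center per identity, regardless of domain.
        self.centers = nn.Parameter(torch.randn(num_ids, feat_dim))

    def forward(self, sketch_feats, sketch_ids, rgb_feats, rgb_ids):
        # Pool features and identity labels from both domains, then
        # penalize each feature's squared distance to its identity's center.
        feats = torch.cat([sketch_feats, rgb_feats], dim=0)
        ids = torch.cat([sketch_ids, rgb_ids], dim=0)
        diff = feats - self.centers[ids]
        return diff.pow(2).sum(dim=1).mean()

# Example usage with assumed dataset sizes and feature dimension:
# loss_fn = CrossDomainCenterLoss(num_ids=200, feat_dim=2048)
# loss = loss_fn(sketch_feats, sketch_ids, rgb_feats, rgb_ids)

Because the centers are indexed by identity rather than by (identity, domain), minimizing this term drives sketch and RGB embeddings of the same person toward one another, which is the effect the abstract attributes to the CDC loss.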
Keywords
Transformers, Task analysis, Face recognition, Feature extraction, Representation learning, Image color analysis, Cameras, Sketch re-identification, cross-domain attention, domain-invariant feature, center loss