Cross-modality Deep Feature Matching Network for Visible-Infrared Person Re-identification

Jinhua Jiang, Wenfeng Zhang

2023 2nd International Conference on Computing, Communication, Perception and Quantum Technology (CCPQT), 2023

Abstract
Visible-infrared person re-identification (VIReID) aims to match pedestrian images across the visible and infrared modalities. Existing methods focus on modality-transformation approaches to alleviate modality differences, but they do not account for the relationships among local information across modalities. We therefore propose a Cross-modality Deep Feature Matching Network (CDFMN) for VIReID that adopts a feature matching approach. CDFMN effectively handles both modality-specific and modality-shared information by leveraging modality-invariant features. We introduce novel losses to capture modality-specific features and establish cross-modal correspondences, and we employ deep feature matching to measure the similarity between image pairs. This pioneering work applies feature matching to VIReID, demonstrating its effectiveness in addressing the challenges of cross-modal matching.
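The abstract does not specify how CDFMN's deep feature matching is computed, so the following is only an illustrative sketch of one generic way local-feature matching could score a visible/infrared image pair: mutual nearest-neighbor matching over local feature vectors under cosine similarity. The function name, feature shapes, and matching rule are all assumptions, not the paper's method.

```python
# Hypothetical sketch (not the paper's CDFMN): score a visible/infrared
# pair by matching their local feature vectors with cosine similarity
# and averaging over mutual nearest-neighbor correspondences.
import numpy as np

def match_score(feats_vis, feats_ir):
    """feats_*: (N, D) arrays of local features from each modality."""
    # L2-normalize so dot products become cosine similarities
    v = feats_vis / np.linalg.norm(feats_vis, axis=1, keepdims=True)
    r = feats_ir / np.linalg.norm(feats_ir, axis=1, keepdims=True)
    sim = v @ r.T                    # (N_vis, N_ir) similarity matrix
    # keep only mutual nearest neighbors as valid correspondences
    fwd = sim.argmax(axis=1)         # best IR match per visible feature
    bwd = sim.argmax(axis=0)         # best visible match per IR feature
    mutual = bwd[fwd] == np.arange(len(fwd))
    if not mutual.any():
        return 0.0
    # pair similarity = mean cosine similarity over mutual matches
    return float(sim[np.arange(len(fwd)), fwd][mutual].mean())

rng = np.random.default_rng(0)
a = rng.normal(size=(6, 16))
print(match_score(a, a))             # identical features match perfectly
```

A real system would extract the local features with a shared or modality-specific backbone and rank gallery images by this score; the sketch only shows the matching step itself.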
Keywords
feature matching, visible-infrared person re-identification (VIReID), modality-specific information, modality-shared information