Learning Consistent Feature Representation for Cross-Modal Multimedia Retrieval

IEEE Transactions on Multimedia (2015)

Cited by 233 | Views 80
Abstract
Cross-modal feature matching has gained much attention in recent years and has many practical applications, such as text-to-image retrieval. The most difficult problem in cross-modal matching is how to eliminate the heterogeneity between modalities. Existing methods (e.g., CCA and PLS) try to learn a common latent subspace in which the heterogeneity between two modalities is minimized so that cross-matching becomes possible. However, most of these methods require fully paired samples and have difficulty dealing with unpaired data. Besides, class label information has been found to be a good way to reduce the semantic gap between low-level image features and high-level document descriptions. Motivated by these observations, we propose a novel and effective supervised algorithm that can also handle unpaired data. In the proposed formulation, the basis matrices of different modalities are learned jointly from the training samples. Moreover, a local group-based prior is introduced into the formulation to make better use of popular block-based features (e.g., HOG and GIST). Extensive experiments are conducted on four public databases: Pascal VOC2007, LabelMe, Wikipedia, and NUS-WIDE. We also evaluate the proposed algorithm with unpaired data. Compared with existing state-of-the-art algorithms, the results show that the proposed algorithm is more robust and achieves the best performance, outperforming the second-best algorithm by about 5% on both the Pascal VOC2007 and NUS-WIDE databases.
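As context for the common-subspace approach the abstract contrasts against, the following is a minimal, hypothetical sketch of a CCA baseline for text-to-image retrieval built with scikit-learn. The feature dimensions, random data, and the cosine_rank helper are placeholders, and this is not the proposed supervised algorithm.

```python
# Minimal sketch of a CCA common-subspace baseline for text-to-image retrieval.
# This illustrates the kind of latent-subspace method (e.g., CCA) the paper
# compares against; it is NOT the proposed supervised algorithm. All data,
# dimensions, and the cosine_rank helper are hypothetical placeholders.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
X_img = rng.normal(size=(200, 512))    # placeholder image features (e.g., HOG/GIST blocks)
Y_txt = rng.normal(size=(200, 100))    # placeholder text features (e.g., topic vectors)

# Learn paired projections into a shared latent subspace where the
# correlation between the two modalities is maximized.
cca = CCA(n_components=10)
cca.fit(X_img, Y_txt)

# Project the image database and a text query into the common space.
X_latent, Y_latent = cca.transform(X_img, Y_txt)
query = Y_latent[0]                    # a projected text query

def cosine_rank(q, database):
    # Rank database rows by cosine similarity to the query vector.
    q = q / np.linalg.norm(q)
    d = database / np.linalg.norm(database, axis=1, keepdims=True)
    return np.argsort(-d @ q)

top5 = cosine_rank(query, X_latent)[:5]
print("Top-5 retrieved image indices:", top5)
```

Note that such a baseline needs fully paired image-text samples for fitting, which is exactly the limitation of unpaired data that the proposed supervised formulation is designed to address.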
Keywords
cross-modal feature matching, NUS-WIDE database, LabelMe database, image representation, latent subspace learning, local group-based prior, image matching, class label information, cross-modal matching, learning (artificial intelligence), low-level image features, text-to-image retrieval, multimedia, Pascal VOC2007 database, retrieval, documents and images, Wikipedia database, modality heterogeneity, feature extraction, high-level document description, image retrieval, feature representation learning, cross-modal multimedia retrieval, block-based features, supervised learning algorithm, algorithm design and analysis, vectors, face recognition, semantics, correlation