Adversarial Learning-Based Semantic Correlation Representation for Cross-Modal Retrieval

IEEE MultiMedia (2020)

Cited by 14 | Views 123
Abstract
Cross-modal retrieval has become a hot topic in recent years. Many existing works focus on correlation learning to generate a common subspace for cross-modal correlation measurement, while others use adversarial learning techniques to reduce the heterogeneity of multimodal data. However, very few works combine correlation learning and adversarial learning to bridge the intermodal semantic gap and diminish cross-modal heterogeneity. This article proposes a novel cross-modal retrieval method, named Adversarial Learning based Semantic COrrelation Representation (ALSCOR), an end-to-end framework that integrates cross-modal representation learning, correlation learning, and adversarial learning. A canonical correlation analysis model, combined with VisNet and TxtNet, is proposed to capture cross-modal nonlinear correlation. In addition, an intramodal classifier and a modality classifier are used to learn intramodal discrimination and minimize intermodal heterogeneity. Comprehensive experiments are conducted on three benchmark datasets. The results demonstrate that the proposed ALSCOR outperforms state-of-the-art methods.
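The correlation-learning ingredient the abstract names is canonical correlation analysis (CCA): given paired features from two modalities, find projections that maximize the correlation between them. As a rough illustration of that ingredient only — a plain linear CCA in NumPy, not the paper's actual VisNet/TxtNet networks, nonlinear extension, or adversarial training (the function name and regularizer are this sketch's own) — it might look like:

```python
import numpy as np

def linear_cca(X, Y, k=2, reg=1e-4):
    """Linear CCA: projections A, B maximizing corr(X @ A, Y @ B).

    X: (n, dx) features from one modality (e.g. image descriptors)
    Y: (n, dy) paired features from the other (e.g. text descriptors)
    Returns projection matrices A (dx, k), B (dy, k) and the top-k
    canonical correlations. `reg` is a small ridge term for stability.
    """
    # Center each view.
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]

    # (Regularized) covariance and cross-covariance matrices.
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n

    def inv_sqrt(C):
        # Inverse matrix square root via eigendecomposition (C is SPD).
        w, V = np.linalg.eigh(C)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    # SVD of the whitened cross-covariance gives the canonical directions.
    T = inv_sqrt(Cxx) @ Cxy @ inv_sqrt(Cyy)
    U, s, Vt = np.linalg.svd(T)
    A = inv_sqrt(Cxx) @ U[:, :k]
    B = inv_sqrt(Cyy) @ Vt[:k].T
    return A, B, s[:k]
```

After projection, both modalities live in a common k-dimensional subspace where retrieval reduces to nearest-neighbor search; in ALSCOR this subspace is additionally shaped by the intramodal and modality classifiers described above.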
Keywords
adversarial learning-based semantic correlation representation, cross-modal correlation measurement, adversarial learning technique, intermodal semantic gap, cross-modal heterogeneity, cross-modal retrieval method, cross-modal representation learning, canonical correlation analysis model, cross-modal nonlinear correlation, intramodal classifier, modality classifier