Domain-aware Stacked AutoEncoders for zero-shot learning.

Neurocomputing (2021)

Abstract
Zero-shot learning (ZSL), which focuses on transferring knowledge from the seen (source) classes to unseen (target) ones, is attracting increasing attention in the computer vision community. However, there is often a large domain gap between the source and target classes, resulting in the projection domain shift problem. To this end, we propose a novel model, named Domain-aware Stacked AutoEncoders (DaSAE), which consists of two interactive stacked auto-encoders that learn domain-aware projections to adapt the source and target domains respectively. In each auto-encoder, the first-layer encoder projects a visual feature vector into the semantic space, and the second-layer encoder connects the semantic description of a sample directly with its label. Meanwhile, the two-layer decoders reconstruct the visual representation from the label information and the semantic description in turn. Moreover, a manifold regularization that exploits the manifold structure residing in the target data is integrated into the basic DaSAE, which further improves the generalization ability of our model. Extensive experiments on benchmark datasets clearly demonstrate that our DaSAE outperforms state-of-the-art alternatives by significant margins.
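The stacked encode/decode pipeline described above (visual features → semantic space → labels, then decoded back) can be sketched as follows. This is a minimal NumPy illustration with tied encoder/decoder weights; all dimension choices, the tied-weight assumption, and the random initialization are hypothetical and not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: visual features, semantic attributes, classes, samples
d_vis, d_sem, n_cls, n = 2048, 85, 50, 4

X = rng.normal(size=(n, d_vis))            # visual feature vectors

# First-layer encoder: visual space -> semantic space
W1 = 0.01 * rng.normal(size=(d_vis, d_sem))
# Second-layer encoder: semantic description -> label scores
W2 = 0.01 * rng.normal(size=(d_sem, n_cls))

S = X @ W1                                 # semantic descriptions
Y = S @ W2                                 # label information

# Two-layer decoder (tied weights, an assumption): reconstruct
# the semantic description from the labels, then the visual features
S_rec = Y @ W2.T
X_rec = S_rec @ W1.T

# A reconstruction term like this would appear in the training objective
recon_error = np.mean((X - X_rec) ** 2)
```

Training would fit `W1` and `W2` on the source domain (with a second, interacting auto-encoder adapted to the target domain, plus the manifold regularizer); the sketch only shows the forward encode/decode pass.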
Keywords
Domain-aware, Stacked AutoEncoders, Zero-shot learning