Similarity Preserving Feature Generating Networks for Zero-Shot Learning

Neurocomputing (2020)

Abstract
Traditional zero-shot learning methods usually solve the problem through visual-semantic embedding. Recently, given the power of generative adversarial networks (GANs), some methods propose to use a GAN to synthesize visual representations conditioned on class attributes or semantic embeddings. In this paper, we propose a novel method, named similarity preserving GAN (SPGAN), which generates unseen visual features from random noise concatenated with semantic descriptions. Specifically, we train a conditional Wasserstein GAN that takes the semantic description of an unseen class as input and generates synthesized visual features for that class. For GAN-based approaches, the generator should produce features that are as realistic as possible. However, an unconstrained training process lets the generated samples deviate greatly from the real ones. To avoid this problem, we propose a similarity preserving loss to regularize the generative network, which helps minimize the distance between synthesized samples and real samples. Furthermore, we use an ensemble method at test time that integrates a nearest-neighbor classifier and a linear softmax classifier. Concretely, in the first stage, we use the nearest-neighbor method to pick out the real features that are highly similar to the generated ones, then combine the selected real features with the generated features to train the final softmax classifier. Experiments on four popular datasets show that our method surpasses state-of-the-art methods.
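To make the abstract's pipeline concrete, here is a minimal NumPy sketch of the two ideas it names: a conditional generator that maps noise concatenated with a class attribute vector to visual features, and a similarity preserving loss that penalizes the distance between synthesized and real samples. All dimensions, the linear generator, and the nearest-real-feature formulation of the loss are illustrative assumptions (the abstract does not give the exact formulation), not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper).
NOISE_DIM, ATTR_DIM, FEAT_DIM = 8, 5, 16

# Toy linear "generator": maps [noise ; class attribute] -> visual feature.
# A real SPGAN generator would be a trained neural network.
W = rng.normal(size=(NOISE_DIM + ATTR_DIM, FEAT_DIM))

def generate(attr, n):
    """Synthesize n visual features for one class attribute vector."""
    z = rng.normal(size=(n, NOISE_DIM))
    inp = np.concatenate([z, np.tile(attr, (n, 1))], axis=1)
    return inp @ W

def similarity_preserving_loss(fake, real):
    """Mean squared distance from each synthesized feature to its nearest
    real feature -- one plausible reading of the regularizer described in
    the abstract (assumed form, not the paper's exact definition)."""
    d = np.linalg.norm(fake[:, None, :] - real[None, :, :], axis=2)
    return float(np.mean(d.min(axis=1) ** 2))

attr = rng.normal(size=ATTR_DIM)          # semantic description of a class
real = rng.normal(size=(20, FEAT_DIM))    # stand-in for real visual features
fake = generate(attr, 10)
loss = similarity_preserving_loss(fake, real)
```

In training, this scalar would be added to the conditional Wasserstein GAN objective so that gradient updates pull synthesized features toward the real feature distribution.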
Keywords
Zero-shot learning,Generative adversarial network,Similarity preserving