Zero-Shot Learning via Semantic Similarity Embedding

IEEE International Conference on Computer Vision (ICCV), 2015

Cited by 737 | Viewed 185
Abstract
In this paper we consider a version of the zero-shot learning problem where seen-class source and target domain data are provided. The goal at test time is to accurately predict the class label of an unseen target domain instance based on revealed source domain side information (e.g., attributes) for unseen classes. Our method is based on viewing each source or target instance as a mixture of seen-class proportions, and we postulate that the mixture patterns must be similar if two instances belong to the same unseen class. This perspective leads us to learn source/target embedding functions that map arbitrary source/target domain data into a common semantic space where similarity can be readily measured. We develop a max-margin framework to learn these similarity functions and jointly optimize parameters by means of cross validation. Our test results are compelling, yielding significant accuracy improvements on most benchmark datasets for zero-shot recognition.
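The core idea above — embed both source (attribute) and target (feature) instances as mixtures of seen-class proportions, then match them by similarity in that shared space — can be illustrated with a minimal synthetic sketch. Note this is a simplification for intuition only: the class vectors and data below are random, the mixture step here is a plain softmax over similarities rather than the paper's learned max-margin embedding, and all names (`mixture`, `predict`, etc.) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (entirely synthetic): 4 seen classes, 2 unseen classes,
# a 5-dim attribute (source) space and an 8-dim feature (target) space.
S_seen = rng.random((4, 5))    # seen-class attribute vectors
S_unseen = rng.random((2, 5))  # unseen-class attribute vectors
mu_seen = rng.random((4, 8))   # seen-class means in feature space

def mixture(v, basis):
    """Embed v as proportions over seen classes via a softmax over its
    similarity to each basis row. (Stand-in for the paper's learned
    max-margin embedding functions.)"""
    scores = basis @ v
    e = np.exp(scores - scores.max())
    return e / e.sum()

# Source-side semantic embeddings of the unseen classes.
Z_unseen = np.array([mixture(s, S_seen) for s in S_unseen])

def predict(x):
    """Classify a target-domain instance x among the unseen classes by
    cosine similarity between mixture patterns in the shared space."""
    z = mixture(x, mu_seen)
    sims = Z_unseen @ z / (np.linalg.norm(Z_unseen, axis=1) * np.linalg.norm(z))
    return int(np.argmax(sims))

x = rng.random(8)
print(predict(x))  # index (0 or 1) of the predicted unseen class
```

Both domains end up represented as probability vectors over the same seen classes, so a target instance and an unseen-class attribute vector become directly comparable even though they live in spaces of different dimension.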
Keywords
semantic similarity embedding, zero-shot learning problem, seen class source, target domain data, class label prediction, unseen target domain instance, source domain side information, unseen classes, seen class proportions, mixture patterns, source/target embedding functions, semantic space, similarity measure, max-margin framework, similarity functions, parameter optimization, cross validation, zero-shot recognition