Cross-Domain Visual Attention Model Adaption with One-Shot GAN

2020 IEEE Conference on Multimedia Information Processing and Retrieval (MIPR)

Abstract
State-of-the-art models for visual attention prediction perform well on common images, but they generally degrade when applied to another domain with a conspicuously different data distribution, such as the solar images studied in this work. To address this issue and adapt these models from common images to the sun, this paper introduces a new dataset, named VASUN, which records free-viewing human attention on solar images. Building on this dataset, we propose a new cross-domain model adaptation approach: a Siamese feature extraction network with two discriminators, trained in a one-shot learning manner, that bridges the gap between the source and target domains through a joint distribution space. Finally, we benchmark existing models as well as our method on VASUN and analyze the problem of predicting visual attention on the sun. The results show that our method achieves state-of-the-art performance with only one labeled image in the target domain and advances the domain adaptation task.
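The adaptation scheme described above pairs a shared (Siamese) feature extractor with two domain discriminators trained adversarially against it. The sketch below is a minimal, hypothetical PyTorch rendering of that idea, assuming a small convolutional backbone and patch-level discriminators; all class names, layer sizes, and the loss wiring are illustrative assumptions, not the authors' released code.

```python
# Hypothetical sketch: a shared (Siamese) feature extractor processes both
# source images and the single labeled target image, while two
# discriminators judge domain membership of the resulting features.
# Names and architecture are assumptions for illustration only.
import torch
import torch.nn as nn

class FeatureExtractor(nn.Module):
    """Shared CNN backbone applied to both domains (Siamese weights)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Patch-level domain discriminator over feature maps."""
    def __init__(self, in_ch=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 3, stride=1, padding=1),  # per-patch logits
        )

    def forward(self, f):
        return self.net(f)

extractor = FeatureExtractor()
d_source, d_target = Discriminator(), Discriminator()
bce = nn.BCEWithLogitsLoss()

src = torch.randn(4, 3, 128, 128)  # batch of labeled common images
tgt = torch.randn(1, 3, 128, 128)  # the single labeled solar image (one-shot)

# Same weights see both domains; features are detached for the D step.
f_src, f_tgt = extractor(src), extractor(tgt)
logits_src = d_source(f_src.detach())
logits_tgt = d_target(f_tgt.detach())

# Discriminator objective: tell the two domains apart. In full training this
# alternates with an extractor update using flipped labels, so the shared
# features become domain-indistinguishable (the adversarial alignment).
loss_d = bce(logits_src, torch.ones_like(logits_src)) + \
         bce(logits_tgt, torch.zeros_like(logits_tgt))
loss_d.backward()
```

In a complete pipeline this adversarial loss would be combined with a supervised saliency loss on the source batch and on the one labeled target image, alternating discriminator and extractor updates as in standard GAN training.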
Keywords
Siamese feature extraction network, one-shot learning, source domain, target domain, joint distribution space, Sun, cross-domain visual attention model adaption, one-shot GAN, visual attention prediction, free-viewing human attention, VASUN