TAFSSL: Task-Adaptive Feature Sub-Space Learning for Few-Shot Classification

European Conference on Computer Vision (2020)

Abstract
Recently, Few-Shot Learning (FSL), or learning from very few (typically 1 or 5) examples per novel class (unseen during training), has received a lot of attention and seen significant performance advances. While a number of techniques have been proposed for FSL, several factors have emerged as most important for FSL performance, awarding SOTA even to the simplest of techniques. These are: the backbone architecture (bigger is better), the type of pre-training (meta-training vs. multi-class), the quantity and diversity of the base classes (the more the merrier), and the use of auxiliary self-supervised tasks (a proxy for increasing the diversity). In this paper we propose TAFSSL, a simple technique for improving few-shot performance in cases where some additional unlabeled data accompanies the few-shot task. TAFSSL is built upon the intuition of reducing the feature and sampling noise inherent to few-shot tasks comprised of novel classes unseen during pre-training. Specifically, we show that on the challenging miniImageNet and tieredImageNet benchmarks, TAFSSL can improve the current state of the art in both transductive and semi-supervised FSL settings by more than 5%, while increasing the benefit of using unlabeled data in FSL to above 10% performance gain.
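As a rough illustration of the task-adaptive sub-space idea described above (a minimal sketch, not the authors' exact algorithm), the snippet below projects backbone features of all samples in a few-shot task (support, query, and optionally unlabeled images) onto a low-dimensional sub-space fitted on that task alone, then classifies queries by nearest class centroid. The use of PCA as the sub-space, the function names, and the default dimensionality are assumptions made for illustration.

```python
# Hypothetical sketch of a task-adaptive feature sub-space step.
# PCA is an assumed sub-space choice here, not necessarily the paper's exact method.
import numpy as np

def nearest_centroid_predict(support_feats, support_labels, query_feats, subspace_dim=10):
    """Project task features onto a sub-space fitted on this task's own samples,
    then assign each query to the nearest class centroid."""
    # Fit the sub-space on ALL features available in the task
    # (support + query; an extra unlabeled pool could be concatenated the same way).
    task_feats = np.concatenate([support_feats, query_feats], axis=0)
    mean = task_feats.mean(axis=0, keepdims=True)
    centered = task_feats - mean

    # Top-k principal directions of the task's own feature distribution.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:subspace_dim]                      # (k, d)

    s_proj = (support_feats - mean) @ basis.T      # (n_support, k)
    q_proj = (query_feats - mean) @ basis.T        # (n_query, k)

    # Class centroids in the reduced sub-space.
    classes = np.unique(support_labels)
    centroids = np.stack([s_proj[support_labels == c].mean(axis=0) for c in classes])

    # Nearest-centroid assignment for each query.
    dists = np.linalg.norm(q_proj[:, None, :] - centroids[None, :, :], axis=-1)
    return classes[dists.argmin(axis=1)]
```

In a transductive or semi-supervised setting, it is the query/unlabeled pool that makes the fitted sub-space task-adaptive; with only the handful of labeled support examples, such a projection would be estimated from far fewer samples and be much noisier.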
Keywords
classification, learning, feature, task-adaptive, sub-space, few-shot