How Useful Is Self-Supervised Pretraining For Visual Tasks?

2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)

Citations: 116 | Views: 371
Abstract
Recent advances have spurred incredible progress in self-supervised pretraining for vision. We investigate what factors may play a role in the utility of these pretraining methods for practitioners. To do this, we evaluate various self-supervised algorithms across a comprehensive array of synthetic datasets and downstream tasks. We prepare a suite of synthetic data that enables an endless supply of annotated images as well as full control over dataset difficulty. Our experiments offer insights into how the utility of self-supervision changes as the number of available labels grows as well as how the utility changes as a function of the downstream task and the properties of the training data. We also find that linear evaluation does not correlate with finetuning performance. Code and data are available at github.com/princeton-vl/selfstudy.
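The abstract contrasts two standard downstream evaluation protocols: linear evaluation (a linear classifier trained on frozen pretrained features) and finetuning (all weights updated on the downstream task). The sketch below is not the authors' code; it is a minimal illustration of the difference, assuming a torchvision ResNet-18 as a stand-in for a self-supervised backbone.

```python
# Hypothetical sketch (assumption: not from the paper's repo) contrasting
# linear evaluation vs. finetuning of a pretrained backbone.
import torch.nn as nn
import torchvision.models as models


def build_linear_eval_model(num_classes: int) -> nn.Module:
    """Linear evaluation: freeze the backbone, train only a new linear head."""
    backbone = models.resnet18(weights=None)  # stand-in for a self-supervised encoder
    for p in backbone.parameters():
        p.requires_grad = False              # frozen features
    # The freshly created head defaults to requires_grad=True, so it alone is trained.
    backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)
    return backbone


def build_finetune_model(num_classes: int) -> nn.Module:
    """Finetuning: every parameter, backbone included, is updated downstream."""
    backbone = models.resnet18(weights=None)
    backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)
    return backbone


if __name__ == "__main__":
    model = build_linear_eval_model(num_classes=10)
    trainable = [name for name, p in model.named_parameters() if p.requires_grad]
    print("Linear eval trains only:", trainable)  # expect fc.weight and fc.bias
```

The paper's finding that linear evaluation does not correlate with finetuning performance means rankings produced by the first protocol may not predict rankings under the second.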
Keywords
synthetic data, dataset difficulty, self-supervision, downstream tasks, self-supervised pretraining, visual tasks, pretraining methods, self-supervised algorithms, synthetic datasets, linear evaluation