ClusterFit: Improving Generalization of Visual Representations

CVPR, pp. 6508-6517, 2019.

Summary: We demonstrate that the misalignment between pre-training and transfer tasks, caused by high levels of noise in web data or by the non-semantic nature of self-supervised pretext tasks, leads to a less generalizable feature space.

Abstract:

Pre-training convolutional neural networks with weakly-supervised and self-supervised strategies is becoming increasingly popular for several computer vision tasks. However, due to the lack of strong discriminative signals, these learned representations may overfit to the pre-training objective (e.g., hashtag prediction) and not generalize…
