ClusterFit: Improving Generalization of Visual Representations
CVPR, pp. 6508-6517, 2019.
Summary: We demonstrate that the misalignment between pre-training and transfer tasks, caused by high levels of noise in web data or by the non-semantic nature of self-supervised pretext tasks, leads to a less-generalizable feature space.
Pre-training convolutional neural networks with weakly-supervised and self-supervised strategies is becoming increasingly popular for several computer vision tasks. However, due to the lack of strong discriminative signals, these learned representations may overfit to the pre-training objective (e.g., hashtag prediction) and not generalize well.