Contrastive learning is a method of teaching a deep learning model which things are similar and which are different. With this approach, a machine learning model can be trained to distinguish between similar and dissimilar images.
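As a rough illustration of the idea, below is a minimal sketch of the classic margin-based pairwise contrastive loss (a siamese-network-style formulation, not tied to any particular paper listed here). PyTorch and the names `pairwise_contrastive_loss`, `emb_a`, `emb_b`, `label` are assumptions for illustration only.

```python
import torch
import torch.nn.functional as F

def pairwise_contrastive_loss(emb_a, emb_b, label, margin=1.0):
    # Euclidean distance between the two embeddings of a pair.
    dist = F.pairwise_distance(emb_a, emb_b)
    # Similar pairs (label 1) are pulled together; dissimilar pairs
    # (label 0) are pushed until they are at least `margin` apart.
    pos = label * dist.pow(2)
    neg = (1 - label) * F.relu(margin - dist).pow(2)
    return (pos + neg).mean()

# Toy usage: a batch of 4 embedding pairs with random values.
a, b = torch.randn(4, 128), torch.randn(4, 128)
y = torch.tensor([1.0, 0.0, 1.0, 0.0])
print(pairwise_contrastive_loss(a, b, y))
```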
ICML, pp.1597-1607, (2020)
This paper presents SimCLR: a simple framework for contrastive learning of visual representations. We simplify recently proposed contrastive self-supervised learning algorithms without requiring specialized architectures or a memory bank. In order to understand what enables the...
Cited by 309 · Views 337
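For context, here is a minimal sketch of the NT-Xent objective at the heart of SimCLR, assuming PyTorch and that `z1`, `z2` hold the projected embeddings of two augmented views of the same batch. Names and shapes are illustrative, not the authors' code.

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """NT-Xent sketch: each view's positive is the other view of the
    same image; the remaining 2N-2 embeddings act as negatives."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D)
    sim = z @ z.t() / temperature                        # cosine similarities
    sim.fill_diagonal_(float('-inf'))                    # never match with self
    # For row i, the positive sits n positions away (i <-> i+n).
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)])
    return F.cross_entropy(sim, targets)

z1, z2 = torch.randn(8, 128), torch.randn(8, 128)  # toy projections
print(nt_xent(z1, z2))
```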
Recent studies on unsupervised representation learning from images are converging on a central concept known as contrastive learning
Cited by 45 · Views 653
NeurIPS, (2020)
This opens the possibility of applications in semi-supervised learning, which can leverage the benefits of a single loss that can smoothly shift behavior based on the availability of labeled data
Cited by 25 · Views 596
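A minimal sketch of a label-aware ("supervised") contrastive loss in the spirit described above, assuming PyTorch: samples sharing a label are treated as positives for each other, and with unique labels per sample it degenerates to the self-supervised case. Function and tensor names are illustrative.

```python
import torch
import torch.nn.functional as F

def supcon_loss(z, labels, temperature=0.1):
    """Label-aware contrastive loss sketch over a batch of embeddings."""
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / temperature
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    # Log-softmax over all other samples (self excluded from the denominator).
    sim = sim.masked_fill(self_mask, float('-inf'))
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    log_prob = log_prob.masked_fill(self_mask, 0.0)  # avoid -inf * 0
    # Average log-probability of the positives for each anchor that has any.
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)
    loss = -(log_prob * pos_mask.float()).sum(dim=1) / pos_counts
    return loss[pos_mask.any(dim=1)].mean()

z = torch.randn(8, 128)
labels = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])
print(supcon_loss(z, labels))
```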
This paper proposed Prototypical Contrastive Learning, a generic unsupervised representation learning framework that finds network parameters to maximize the log-likelihood of the observed data
Cited by 15 · Views 167
ICLR, (2020)
We have developed a novel technique for neural network distillation, using the concept of contrastive objectives, which are usually used for representation learning
Cited by 7 · Views 667
Inoue Nakamasa, Goto Keita
We showed via experiments on the VoxCeleb dataset that the proposed generalized contrastive loss enables a network to learn speaker embeddings in three settings: supervised learning, semi-supervised learning, and unsupervised learning
Cited by 0 · Views 56
Phuc H. Le-Khac, Graham Healy, Alan F. Smeaton
IEEE Access, (2020): 193907-193934
Contrastive Learning has recently received interest due to its success in self-supervised representation learning in the computer vision domain. However, the origins of Contrastive Learning date as far back as the 1990s and its development has spanned across many fields and domains...
Cited by 0 · Views 43
Giorgi John M., Nitski Osvald, Bader Gary D., Wang Bo
Although our method sometimes underperforms existing supervised solutions on average downstream performance, we found that this is partially explained by the fact that these methods are trained on the Stanford NLI corpus, which is included as a downstream evaluation task in SentEval...
Cited by 0 · Views 53
ICML, (2020)
We introduce a self-supervised approach for learning node and graph level representations by contrasting structural views of graphs
Cited by 0 · Views 82
Visual algorithms like Instance Discrimination and Local Aggregation no longer look very different from masked language modeling, as both families are unified under mutual information
Cited by 0 · Views 104
NeurIPS, (2020)
One color space learned from RGB happens to touch the sweet spot, but in general the mutual-information estimate I_NCE between views is overly decreased. The reverse-U-shape trend holds for both non-volume-preserving and volume-preserving models
Cited by 0 · Views 163
Udandarao Vishaal, Maiti Abhishek, Srivatsav Deepak, Vyalla Suryatej Reddy, Yin Yifang, Shah Rajiv Ratn
We believe this is because current cross-modal representation systems regularize the distance between pairs of representations of samples belonging to the same class, but not between pairs of representations belonging to different classes
Cited by 0 · Views 50
CVPR, pp.9726-9735, (2019)
Momentum Contrast is on par on Cityscapes instance segmentation and lags behind on VOC semantic segmentation; we show another comparable case on iNaturalist in the appendix
Cited by 228 · Views 567
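A minimal sketch of the momentum-encoder update and key queue that give Momentum Contrast its name, assuming PyTorch; the function names, queue size, and toy encoders are illustrative, not the authors' implementation.

```python
import copy
import torch
import torch.nn as nn

@torch.no_grad()
def momentum_update(query_encoder, key_encoder, m=0.999):
    # The key encoder tracks the query encoder via an exponential
    # moving average of its weights instead of backpropagation.
    for q_param, k_param in zip(query_encoder.parameters(),
                                key_encoder.parameters()):
        k_param.data.mul_(m).add_(q_param.data, alpha=1 - m)

@torch.no_grad()
def enqueue(queue, new_keys, max_size=4096):
    # FIFO queue of encoded keys that serve as extra negatives.
    return torch.cat([queue, new_keys], dim=0)[-max_size:]

# Toy usage with tiny linear "encoders" and an initially empty queue.
query_encoder = nn.Linear(32, 16)
key_encoder = copy.deepcopy(query_encoder)
momentum_update(query_encoder, key_encoder, m=0.99)
queue = enqueue(torch.empty(0, 16), key_encoder(torch.randn(8, 32)))
print(queue.shape)  # torch.Size([8, 16])
```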
Olivier J. Hénaff, Aravind Srinivas, Jeffrey De Fauw, Ali Razavi, Carl Doersch, S. M. Ali Eslami, Aaron van den Oord
arXiv preprint arXiv:1905.09272, (2019)
Deep neural networks excel at perceptual tasks when labeled data are abundant, yet their performance degrades substantially when provided with limited supervision
Cited by 181 · Views 64
CVPR, (2019): 4893-4902
We proposed Contrastive Adaptation Network to perform class-aware alignment for Unsupervised Domain Adaptation
Cited by 76 · Views 153
CoRR, (2019)
We show connections to mutual information maximization and extend it to scenarios including more than two views
Cited by 75 · Views 98
arXiv: Learning, (2018)
In this paper we presented Contrastive Predictive Coding, a framework for extracting compact latent representations to encode predictions over future observations
Cited by 501 · Views 111
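For reference, the InfoNCE objective used by Contrastive Predictive Coding can be sketched as below, assuming PyTorch; a plain dot product stands in for the paper's bilinear score, and `context` / `future` are hypothetical encoded context and future-step representations.

```python
import torch
import torch.nn.functional as F

def info_nce(context, future, temperature=1.0):
    """InfoNCE sketch: each context must identify its own future
    representation among the futures of the rest of the batch."""
    scores = context @ future.t() / temperature   # (N, N) similarity matrix
    targets = torch.arange(context.size(0))       # positives on the diagonal
    return F.cross_entropy(scores, targets)

c, f = torch.randn(16, 64), torch.randn(16, 64)  # toy context/future codes
print(info_nce(c, f))
```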
R. Devon Hjelm, Alex Fedorov, Samuel Lavoie-Marchildon, Karan Grewal, Adam Trischler, Yoshua Bengio
arXiv: Machine Learning, (2018)
Our results show that infoNCE tends to perform best, but differences between infoNCE and Jensen-Shannon divergence diminish with larger datasets
Cited by 400 · Views 95
Pierre Sermanet, Corey Lynch, Yevgen Chebotar, Jasmine Hsu, Eric Jang, Stefan Schaal, Sergey Levine, Google Brain
ICRA, pp.1134-1141, (2018)
Robots in the real world would be capable of two things: learning the relevant attributes of an object interaction task purely from observation, and understanding how human poses and...
Cited by 165 · Views 226
CVPR, (2018)
We present an unsupervised feature learning approach by maximizing distinction between instances via a novel nonparametric softmax formulation
Cited by 11 · Views 123
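A rough sketch of such a nonparametric softmax over instances, assuming PyTorch and a hypothetical `memory_bank` holding one embedding per training instance (the paper additionally relies on noise-contrastive estimation to avoid evaluating the full softmax).

```python
import torch
import torch.nn.functional as F

def instance_probability(v, memory_bank, temperature=0.07):
    """Nonparametric softmax sketch: the probability that feature v is
    recognized as each stored instance, using the memory-bank embeddings
    themselves as the class 'weights'."""
    v = F.normalize(v, dim=1)                   # (B, D) query features
    bank = F.normalize(memory_bank, dim=1)      # (N, D) one row per instance
    logits = v @ bank.t() / temperature         # (B, N) instance scores
    return F.softmax(logits, dim=1)

features = torch.randn(4, 128)
memory_bank = torch.randn(1000, 128)    # toy bank of 1000 instance embeddings
print(instance_probability(features, memory_bank).shape)  # (4, 1000)
```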