The remarkable success of ResNets is largely due to the shortcut connection mechanism, which makes training deeper networks feasible: gradients can flow directly through the building blocks, mitigating the vanishing-gradient problem.
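To make the shortcut mechanism concrete, here is a minimal sketch of a ResNet-style basic block in PyTorch (channel count and layer choices are illustrative, not taken from any specific ResNet variant). The key line is the addition `out + x`: because addition passes gradients through unchanged, the backward pass always has a direct path around the convolutional branch.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Minimal sketch of a ResNet basic block. The shortcut adds the
    input x back to the convolutional branch's output, giving gradients
    a direct path through the block during backpropagation."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # shortcut connection

x = torch.randn(2, 16, 8, 8)  # (batch, channels, height, width)
block = ResidualBlock(16)
y = block(x)  # output keeps the input's shape, so blocks can be stacked
```

Because the block preserves the tensor shape, many such blocks can be stacked to build arbitrarily deep networks without the shortcut path ever being interrupted.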
We have shown that a few-shot learner built on top of a high-quality deep CNN can achieve remarkable accuracy, and that a learner built upon an entire library of CNNs can significantly outperform one built upon any single deep CNN.
We briefly introduced several deep models that are often used to classify hyperspectral images, including stacked auto-encoders, deep belief networks, convolutional neural networks, recurrent neural networks, and generative adversarial networks.
Through additional experiments, we find that the difference in active learning performance can be explained by a combination of decreased model capacity and lower diversity of Monte Carlo Dropout ensembles.
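For readers unfamiliar with the technique, a minimal sketch of Monte Carlo Dropout in PyTorch follows (the network architecture, dropout rate, and number of passes are illustrative assumptions, not the paper's setup). Dropout is kept active at inference time, and the spread across stochastic forward passes is what gives the "ensemble" its diversity; if that spread is small, the ensemble members are nearly identical.

```python
import torch
import torch.nn as nn

# Illustrative model: the Dropout layer is what makes MC sampling possible.
model = nn.Sequential(
    nn.Linear(10, 32), nn.ReLU(),
    nn.Dropout(p=0.5),
    nn.Linear(32, 3),
)

def mc_predict(model, x, T=20):
    """Run T stochastic forward passes with dropout active and return the
    mean predictive distribution and its per-class spread."""
    model.train()  # keep dropout sampling masks at inference time
    with torch.no_grad():
        probs = torch.stack([model(x).softmax(dim=-1) for _ in range(T)])
    return probs.mean(dim=0), probs.std(dim=0)

x = torch.randn(4, 10)             # 4 unlabeled examples, 10 features each
mean, spread = mc_predict(model, x)
```

In an active-learning loop, `mean` would feed an acquisition function (e.g. predictive entropy), while `spread` directly measures the ensemble diversity discussed above.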
One characteristic that sets humans apart from modern learning-based computer vision algorithms is the ability to acquire knowledge about the world and use it to reason about the visual world. Humans can learn about the characteristics of objects and the relationships...
Image classification has advanced significantly in recent years with the availability of large-scale image datasets. However, fine-grained classification remains a major challenge due to the cost of annotating large numbers of fine-grained categories. This project shows that compel...
In the few-shot setting, we showed improvements over the Web-Scale Annotation By Image Embedding (WSABIE) algorithm, which learns the label embedding from labeled data but does not leverage prior information.
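The label-embedding idea can be sketched as follows (dimensions and the random parameters are hypothetical stand-ins for what WSABIE would learn from labeled data with its ranking loss). Images and labels are mapped into a shared embedding space, and each label is scored by an inner product with the projected image.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical sizes: d-dim image features, k-dim joint embedding, C labels.
d, k, C = 128, 32, 10
V = rng.normal(size=(k, d))  # image projection (learned from labeled data)
W = rng.normal(size=(C, k))  # one embedding per label (learned from labeled data)

def score_labels(x):
    """Project the image feature into the joint space and score every
    label by its inner product with the projected image."""
    z = V @ x      # image embedding in the shared space
    return W @ z   # one score per label

x = rng.normal(size=d)        # a single image's feature vector
scores = score_labels(x)
pred = int(np.argmax(scores)) # predicted label index
```

The sentence above notes the limitation this sketch makes visible: `V` and `W` are fit purely from labeled pairs, with no slot for prior information about the labels.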
This is not an easy problem: we have found that naive approaches, such as concatenating the GPS coordinates into the classifier's input or leveraging nearby images as a Bayesian prior, result in almost no gain in performance.
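For concreteness, here is what the first naive baseline looks like (feature dimensions are illustrative assumptions): the normalized GPS coordinates are simply appended to the image feature vector before it enters the classifier, so the classifier must discover any geographic structure on its own.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical shapes: 512-dim CNN features for 5 images, plus lat/lon
# coordinates normalized to [-1, 1] before concatenation.
img_feats = rng.normal(size=(5, 512))
gps = rng.uniform(-1.0, 1.0, size=(5, 2))

# The naive fusion: a 514-dim vector fed to the downstream classifier.
fused = np.concatenate([img_feats, gps], axis=1)
```

As the sentence reports, this kind of early fusion adds only two low-dimensional coordinates to a high-dimensional feature vector, which helps explain why it yields almost no gain.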