Transfer learning in CV

In some computer vision domains, such as bioinformatics, data acquisition and annotation require extensive clinical trials, so it is hard to build large-scale, high-quality labeled datasets, and this has limited progress. Transfer learning was proposed to address this: it relaxes the usual assumption that training data must be independent and identically distributed with the test data, which lets us compensate for insufficient training data. Transfer learning is essentially a story about standing on the shoulders of giants: as deep learning spreads to ever more application scenarios, it is natural to ask how a model that has already been trained can be reused for similar tasks.
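As a concrete illustration of reusing an already trained model, here is a minimal fine-tuning sketch, assuming PyTorch and torchvision; the backbone choice, class count, and hyperparameters are placeholders for illustration, not taken from any paper below.

```python
# Minimal transfer-learning sketch: reuse a pretrained backbone,
# retrain only a new classification head on scarce target data.
import torch
import torch.nn as nn
from torchvision import models

# Load a ResNet-18 pretrained on ImageNet: the "giant's shoulders".
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Freeze the pretrained feature extractor so the small target dataset
# only has to fit a small number of parameters.
for param in model.parameters():
    param.requires_grad = False

# Replace the final classifier with a new head for the target task
# (e.g. a hypothetical 5-class biomedical problem with few labels).
num_target_classes = 5
model.fc = nn.Linear(model.fc.in_features, num_target_classes)

# Only the new head is optimized.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
```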
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu
Journal of Machine Learning Research, no. 140 (2020): 1-67
While many modern approaches to transfer learning for natural language processing use a Transformer architecture consisting of only a single “stack”, we found that using a standard encoder-decoder structure achieved good results on both generative and classification tasks
Cited by 866
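As a hedged illustration of the text-to-text framing described above (not code from the paper), the sketch below assumes the Hugging Face transformers library and the public t5-small checkpoint: both classification and generation become string-to-string problems handled by one encoder-decoder model.

```python
# Text-to-text transfer learning: a classification task phrased as
# string-to-string generation with a pretrained encoder-decoder.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# CoLA (grammatical acceptability) posed as text with a task prefix;
# the model answers with a label string rather than a logit vector.
inputs = tokenizer("cola sentence: The books is on the table.",
                   return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=5)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```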
CVPR, (2019): 4805-4814
Our approach could be applied to different deep neural network architectures; in the following, we focus on a Residual Network model
Cited by 83
CVPR, (2019): 3877-3886
This paper is the first to propose a semi-supervised learning paradigm for this task
Cited by 70
Carmelo Sferrazza, Raffaello D'Andrea
arXiv: Robotics, (2019): 7961-7967
The results show that the sensor can reconstruct the normal force distribution applied with a test indenter after being trained on an automatically collected dataset
Cited by 16
Paul N. Whatmough, Chuteng Zhou, Patrick Hansen, Shreyas K. Venkataramanaiah, Jae-sun Seo, Matthew Mattina
arXiv: Computer Vision and Pattern Recognition, (2019)
We considered a suite of six image classification problems, and found we can train models using transfer learning with an accuracy loss of < 1%, while achieving up to 11.2 TOPS/W, which is nearly 2× more efficient than a conventional programmable convolutional neural network accelerator
Cited by 12
CVPR, pp.12387-12396, (2019)
We introduce Representation Similarity Analysis as a tool to quantify the relationship between deep neural networks, and demonstrate its application to model selection in transfer learning
Cited by 12
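A rough sketch of how such a representation-similarity score can be computed, assuming features have already been extracted for a common set of probe images; the distance and correlation choices below follow common RSA practice and are not necessarily the paper's exact recipe.

```python
# Representation Similarity Analysis (RSA): compare two networks by
# correlating their representational dissimilarity matrices (RDMs).
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(features):
    # Condensed RDM: pairwise (1 - Pearson correlation) distances
    # between per-image feature vectors.
    return pdist(features, metric="correlation")

def rsa_score(feats_a, feats_b):
    # Spearman correlation between the two RDMs; higher means more
    # similar representations, used as a proxy for transferability.
    rho, _ = spearmanr(rdm(feats_a), rdm(feats_b))
    return rho

# Usage with random stand-in features for 100 probe images:
feats_pretrained = np.random.randn(100, 512)  # candidate source model
feats_target = np.random.randn(100, 256)      # model for the target task
print(rsa_score(feats_pretrained, feats_target))
```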
Chuanqi Tan, Fuchun Sun, Tao Kong, Wenchang Zhang, Chao Yang, Chunfang Liu
ICANN, (2018)
We review and categorize current research on deep transfer learning
Cited by 575
CVPR, (2018): 3712-3722
This is the basis of our approach: we compute an affinity matrix among tasks based on whether the solution for one task can be sufficiently read out of the representation trained for another task. Such transfers are exhaustively sampled, and a Binary Integer Programming formulation...
Cited by 463
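To make the affinity-matrix idea concrete, here is a toy sketch with made-up task names and numbers: given transfer scores among hypothetical tasks, choose a small set of source tasks that best covers all targets. The paper solves this selection with Binary Integer Programming; brute force is used below purely for illustration.

```python
# Toy task-affinity selection: pick `budget` source tasks whose
# representations best "read out" every target task.
from itertools import combinations
import numpy as np

tasks = ["depth", "normals", "edges", "segmentation"]  # hypothetical
# affinity[i, j]: how well target task i is solved by reading out
# the representation trained for source task j (made-up numbers).
affinity = np.array([
    [1.0, 0.8, 0.3, 0.5],
    [0.7, 1.0, 0.4, 0.6],
    [0.2, 0.3, 1.0, 0.4],
    [0.5, 0.6, 0.5, 1.0],
])

def best_sources(affinity, budget):
    # Exhaustively maximize the summed best-available affinity over
    # all targets (the paper instead uses Binary Integer Programming).
    n = affinity.shape[1]
    def coverage(sources):
        return affinity[:, list(sources)].max(axis=1).sum()
    return max(combinations(range(n), budget), key=coverage)

print([tasks[j] for j in best_sources(affinity, budget=2)])
```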
CVPR, (2018)
It achieves considerable accuracy gains on tasks with a large-scale source domain and target domain, e.g. ImageNet-1000 → Caltech-84. These results suggest that the Selective Adversarial Network can learn transferable features for partial transfer learning in all the tasks under the setting where the target label space is a subset of the source label space
Cited by 179
ICML, (2017)
We present joint adaptation networks, which learn a transfer network by aligning the joint distributions of multiple domain-specific layers across domains based on a joint maximum mean discrepancy criterion
Cited by 892
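A simplified sketch of the underlying maximum mean discrepancy term, assuming PyTorch; note the paper aligns joint distributions over several domain-specific layers, whereas this computes a plain single-layer Gaussian-kernel MMD between source and target feature batches.

```python
# Single-layer MMD between source and target features; minimizing it
# pulls the two feature distributions together during training.
import torch

def gaussian_mmd(source, target, bandwidth=1.0):
    # Biased estimate of squared MMD with an RBF kernel.
    def kernel(a, b):
        sq_dists = torch.cdist(a, b).pow(2)
        return torch.exp(-sq_dists / (2 * bandwidth ** 2))
    return (kernel(source, source).mean()
            + kernel(target, target).mean()
            - 2 * kernel(source, target).mean())

# Added to the task loss so that domain-specific layers produce
# aligned source/target feature distributions:
src_feats = torch.randn(32, 256)  # features, labeled source batch
tgt_feats = torch.randn(32, 256)  # features, unlabeled target batch
loss_transfer = gaussian_mmd(src_feats, tgt_feats)
print(loss_transfer.item())
```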
CVPR, (2017)
We introduce a deep transfer learning scheme, called selective joint fine-tuning, for improving the performance of deep learning tasks with insufficient training data
Cited by 81
ICCV, (2013): 2200-2207
We propose a Joint Distribution Adaptation approach for robust transfer learning
Cited by 883
ICCV, (2013): 3208-3215
This paper presents a generative Bayesian transfer learning algorithm well-suited for the face verification problem
Cited by 198