Transfer Learning with Manifold Regularized Convolutional Neural Network

Lecture Notes in Artificial Intelligence (2017)

Abstract
Deep learning has recently been proposed to learn robust representations for various tasks and has delivered state-of-the-art performance over the past few years. Most researchers attribute this success to the substantially increased depth of deep learning models. However, training a deep model is time-consuming and requires a huge amount of data. Although techniques such as fine-tuning can ease these burdens, generalization performance drops significantly in transfer learning settings with little or no target-domain data. Since the representations in higher layers must eventually transition from general to specific, generalization degrades unless sufficient label information from the target domain is integrated. To address this problem, we propose a transfer learning framework called manifold regularized convolutional neural networks (MRCNN). Specifically, MRCNN fine-tunes a very deep convolutional neural network on the source domain while simultaneously trying to preserve the manifold structure of the target domain. Extensive experiments demonstrate the effectiveness of MRCNN compared to several state-of-the-art baselines.
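The abstract does not specify the form of the regularizer, but a common way to preserve target-domain manifold structure while fine-tuning on the source domain is to add a graph-Laplacian smoothness penalty on target features to the supervised source loss. The sketch below illustrates that idea in PyTorch; it is a hedged approximation, not the paper's method, and the names `model.features`, `lam`, and `k` are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def manifold_regularized_loss(model, xs, ys, xt, lam=0.1, k=5):
    """Illustrative objective: cross-entropy on a labelled source batch
    (xs, ys) plus a graph-Laplacian penalty on unlabelled target batch xt.
    `model.features` (penultimate-layer features) is an assumed hook."""
    # Supervised term on the source domain.
    ce = F.cross_entropy(model(xs), ys)

    # Target-domain features from the same network, flattened to (n, d).
    ft = model.features(xt).flatten(1)

    # Gaussian affinity over pairwise distances, restricted to k nearest
    # neighbours of each target sample (batch size must exceed k).
    dist = torch.cdist(ft, ft)
    sigma = dist.median().clamp(min=1e-8)
    w = torch.exp(-dist.pow(2) / (2 * sigma ** 2))
    nn_idx = dist.topk(k + 1, largest=False).indices  # includes self
    mask = torch.zeros_like(w).scatter_(1, nn_idx, 1.0)
    w = 0.5 * (w * mask + (w * mask).T)               # symmetrize

    # Graph Laplacian L = D - W; smoothness term tr(F^T L F) equals
    # (1/2) * sum_ij w_ij * ||f_i - f_j||^2.
    lap = torch.diag(w.sum(dim=1)) - w
    n = ft.size(0)
    manifold = torch.trace(ft.T @ lap @ ft) / (n * n)

    return ce + lam * manifold
```

In a training loop this loss would simply replace the plain cross-entropy term, with `lam` trading off source-domain fit against smoothness of target-domain features along the estimated manifold.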
Keywords
Transfer learning, Convolutional neural network, Manifold learning