SRNN: Self-Regularized Neural Network

Neurocomputing (2018)

Abstract
In this work, we aim to boost the discriminative capability of a deep neural network by alleviating the over-fitting problem. Previous works typically learn a neural network by optimizing one or more objective functions together with existing regularization methods (such as dropout, weight decay, stochastic pooling, and data augmentation). We argue that these approaches are limited in further improving the classification performance of a neural network because they do not fully exploit the knowledge the network itself has learned. In this paper, we introduce a self-regularized strategy for learning a neural network, named the Self-Regularized Neural Network (SRNN). The intuition behind SRNN is that the sample-wise soft targets of a neural network have the potential to pull the network out of a local optimum. More specifically, an initial neural network is first pre-trained by optimizing one or more objective functions with ground-truth labels. We then gradually mine sample-wise soft targets, which reveal the correlation/similarity among the classes predicted by the network itself. The network parameters are further updated to fit these sample-wise soft targets. This self-regularized learning procedure minimizes an objective function that integrates the network's sample-wise soft targets with the ground-truth labels of the training samples. Three characteristics of SRNN are summarized as follows: (1) gradually mining the knowledge learned by a single neural network, then correcting and enhancing it to produce sample-wise soft targets; (2) regularly optimizing the parameters of the network with its sample-wise soft targets; (3) boosting the discriminative capability of the network via this self-regularization strategy.
Extensive experiments on four public datasets, i.e., CIFAR-10, CIFAR-100, Caltech101 and MIT, demonstrate the effectiveness of the proposed SRNN for image classification. (C) 2017 Elsevier B.V. All rights reserved.
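The procedure described in the abstract, pre-training on ground-truth labels and then re-fitting the network to its own sample-wise soft targets, can be sketched as a combined loss in the style of soft-target distillation. This is a minimal NumPy illustration of that idea, not the authors' implementation: the mixing weight `alpha` and the softmax temperature `T` are hypothetical parameters not specified in the abstract.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; higher T yields softer distributions."""
    z = logits / T
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def mine_soft_targets(pretrained_logits, T=2.0):
    """Sample-wise soft targets from the network's own predictions,
    softened with temperature T to expose inter-class similarity."""
    return softmax(pretrained_logits, T=T)

def self_regularized_loss(logits, labels, soft_targets, alpha=0.5):
    """Objective integrating ground-truth labels with the network's
    own soft targets (alpha balances the two terms; hypothetical)."""
    p = softmax(logits)
    n = logits.shape[0]
    # cross-entropy with hard ground-truth labels
    ce_hard = -np.log(p[np.arange(n), labels] + 1e-12).mean()
    # cross-entropy with the mined sample-wise soft targets
    ce_soft = -(soft_targets * np.log(p + 1e-12)).sum(axis=1).mean()
    return (1.0 - alpha) * ce_hard + alpha * ce_soft
```

In a full training loop, the soft targets would be re-mined periodically from the current network and the parameters updated against this combined objective, alternating between the two steps the abstract describes.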
Keywords
Self-regularized learning, Sample-wise soft targets, Neural network, Image classification