
Deep Neural Networks Regularization Using a Combination of Sparsity Inducing Feature Selection Methods

Neural Processing Letters (2021)

Abstract
Deep learning is an important subcategory of machine learning in which there is hope of replacing hand-crafted features with fully automatically extracted ones. However, deep learning generally operates in a very high-dimensional feature space, which can lead to overfitting; regularization techniques are applied to prevent this. In this framework, sparse-representation-based feature selection and regularization methods are very attractive, because sparse methods represent data with as few non-zero coefficients as possible. In this paper, we utilize a variety of sparse-representation-based methods for regularizing deep neural networks. First, the effects of three basic sparsity-inducing methods are studied: Least Square Regression, Sparse Group Lasso (SGL), and Correntropy inducing Robust Feature Selection (CRFS). Then, in order to improve the regularization process, three combinations of these basic methods are proposed. The study is carried out on a simple fully connected deep neural network and a VGG-like network. Our experimental results show that, overall, the combined methods outperform the basic ones. Considering the two key factors of the amount of induced sparsity and classification accuracy, the combination of the CRFS and SGL methods yields very successful results in deep neural networks.
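As a concrete illustration (a minimal sketch, not the authors' implementation), the snippet below shows how a sparsity-inducing penalty in the spirit of Sparse Group Lasso can be added to the training loss of a fully connected network. PyTorch, the group structure (one group per input feature), and the lambda values are all illustrative assumptions not taken from the paper.

```python
# Minimal sketch of SGL-style regularization on a fully connected layer
# (hypothetical setup; lambda values and group structure are assumptions).
import torch
import torch.nn as nn

def sparse_group_lasso(weight: torch.Tensor,
                       lam_l1: float = 1e-4,
                       lam_group: float = 1e-4) -> torch.Tensor:
    """SGL penalty on a (out_features, in_features) weight matrix.

    Groups are the columns: all outgoing weights of one input feature,
    so a zeroed group removes that input feature entirely.
    """
    l1 = weight.abs().sum()                # element-wise (lasso) sparsity
    group = weight.norm(p=2, dim=0).sum()  # column-wise (group) sparsity
    return lam_l1 * l1 + lam_group * group

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
criterion = nn.CrossEntropyLoss()

x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
logits = model(x)
# Total loss = data term + SGL penalty on the first layer's weights.
loss = criterion(logits, y) + sparse_group_lasso(model[0].weight)
loss.backward()
```

Minimizing the group term drives whole columns of the weight matrix to zero, which is what makes such penalties act as feature selectors rather than mere weight shrinkage; combining it with the element-wise term, as the paper's combined methods do with CRFS and SGL, encourages sparsity both within and across groups.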
Key words
Sparse feature selection, Lasso, Regularization, Deep learning