Deep belief networks with self-adaptive sparsity

Applied Intelligence (2021)

Abstract
Sparsity is crucial for deep neural networks, as it can improve their learning ability, especially when they are applied to high-dimensional data with small sample sizes. The regularization terms commonly used to keep deep neural networks sparse are based on the L1-norm or the L2-norm; however, these are not the most reasonable substitutes for the L0-norm. In this paper, based on the fact that minimizing a log-sum function is an effective approximation to minimizing the L0-norm, a sparse penalty term on the connection weights using the log-sum function is introduced. By embedding the corresponding iterative re-weighted-L1 minimization algorithm into k-step contrastive divergence, the connections of deep belief networks can be updated in a sparse, self-adaptive way. Experiments on two kinds of biomedical datasets, both typical small-sample-size datasets with a large number of variables, namely brain functional magnetic resonance imaging data and single nucleotide polymorphism data, show that the proposed deep belief networks with self-adaptive sparsity can learn layer-wise sparse features effectively. The results also demonstrate better performance, in terms of both identification accuracy and sparsity capability, than several typical learning machines.
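To make the idea concrete, the sketch below shows, under stated assumptions, how a log-sum penalty on the connection weights could be folded into a single contrastive-divergence update for one RBM layer of a deep belief network. The function name `cd1_logsum_update`, the hyperparameters `lr`, `lam`, and `eps`, and the use of k = 1 Gibbs steps are illustrative assumptions, not the paper's exact algorithm; the key point is that the gradient of the log-sum term, lam * sign(W) / (|W| + eps), supplies per-weight adaptive shrinkage factors analogous to the re-weighted-L1 weights.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_logsum_update(W, b_vis, b_hid, v0, lr=0.01, lam=1e-3, eps=1e-2, seed=0):
    """Hypothetical sketch: one CD-1 weight update with a log-sum sparse penalty.

    The penalty sum_ij lam * log(1 + |W_ij| / eps) approximates lam * ||W||_0;
    its derivative lam * sign(W_ij) / (|W_ij| + eps) shrinks each weight with
    a factor that adapts to the weight's current magnitude.
    """
    rng = np.random.default_rng(seed)

    # Positive phase: hidden activations driven by the data.
    p_h0 = sigmoid(v0 @ W + b_hid)
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)

    # Negative phase: one Gibbs step (k = 1) back to the visible layer and up.
    p_v1 = sigmoid(h0 @ W.T + b_vis)
    p_h1 = sigmoid(p_v1 @ W + b_hid)

    # Contrastive-divergence gradient estimate.
    grad = (v0.T @ p_h0 - p_v1.T @ p_h1) / v0.shape[0]

    # Adaptive shrinkage from the log-sum penalty (re-weighted-L1-style factor).
    shrink = lam * np.sign(W) / (np.abs(W) + eps)

    W_new = W + lr * (grad - shrink)
    b_vis_new = b_vis + lr * (v0 - p_v1).mean(axis=0)
    b_hid_new = b_hid + lr * (p_h0 - p_h1).mean(axis=0)
    return W_new, b_vis_new, b_hid_new
```

In this reading, weights whose magnitude is already small receive a proportionally stronger shrinkage (since 1 / (|W| + eps) is large), which pushes them toward zero, while large, informative weights are barely penalized; this is the self-adaptive behaviour the abstract attributes to the log-sum term.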
Keywords
Deep belief networks, Iterative re-weighted-L1 minimization algorithm, Self-adaptive sparsity, Contrastive divergence algorithm, Biomedical data