A novel dictionary learning method for sparse representation with nonconvex regularizations

Neurocomputing (2020)

Cited by 11 | Viewed 10
No rating yet
Abstract
In dictionary learning, sparse regularization is used to promote sparsity and has played a major role in the development of dictionary learning algorithms. The ℓ1-norm is one of the most popular sparse regularizers due to its convexity and the tractable convex optimization problems it induces. However, the ℓ1-norm leads to biased solutions and, in certain applications, performs worse than nonconvex sparse regularizers. In this work, we propose a dictionary learning model that promotes sparsity with the generalized minimax-concave (GMC) regularizer, which is nonconvex. Following an alternating optimization scheme, we solve the sparse coding subproblem with the forward–backward splitting (FBS) algorithm, into which we incorporate Nesterov's acceleration technique and an adaptive threshold scheme to improve convergence speed and performance. For the dictionary update step, we apply difference-of-convex-functions (DC) programming and the DC algorithm (DCA), and design two dictionary update algorithms: one updates the dictionary atoms one by one, and the other updates all atoms simultaneously. The presented dictionary learning algorithms perform robustly in dictionary recovery. Numerical experiments verify the performance of the proposed algorithms and compare them with state-of-the-art algorithms.
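To make the sparse coding step concrete, the following is a minimal sketch of accelerated forward–backward splitting with a nonconvex threshold. The abstract does not give the exact GMC-related proximal operator or the adaptive threshold rule, so this sketch substitutes the classical firm (minimax-concave) threshold as a stand-in; the function names `fbs_sparse_code` and `firm_threshold` and the parameter `gamma` are illustrative, not from the paper.

```python
import numpy as np

def firm_threshold(x, lam, gamma):
    """Firm (MC-penalty) threshold, a common nonconvex shrinkage.
    Stand-in for the paper's GMC-related operator (assumption):
    zero for |x| <= lam, linear for lam < |x| <= gamma*lam,
    identity (unbiased) for |x| > gamma*lam. Requires gamma > 1."""
    y = np.zeros_like(x)
    big = np.abs(x) > gamma * lam
    y[big] = x[big]                       # large coefficients pass unbiased
    mid = (np.abs(x) > lam) & ~big
    y[mid] = np.sign(x[mid]) * gamma * (np.abs(x[mid]) - lam) / (gamma - 1.0)
    return y

def fbs_sparse_code(D, y, lam, gamma=2.0, n_iter=200):
    """Accelerated FBS for min_x 0.5*||y - D x||^2 + nonconvex penalty.
    Step size 1/L with L = ||D||_2^2; Nesterov momentum as in FISTA."""
    L = np.linalg.norm(D, 2) ** 2
    x = np.zeros(D.shape[1]); z = x.copy(); t = 1.0
    for _ in range(n_iter):
        grad = D.T @ (D @ z - y)          # forward (gradient) step on the data term
        x_new = firm_threshold(z - grad / L, lam / L, gamma)  # backward (threshold) step
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))      # Nesterov momentum weight
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x
```

Unlike soft thresholding (the ℓ1-norm proximal operator), the firm threshold leaves large coefficients untouched, which is the sense in which nonconvex regularizers reduce the bias the abstract attributes to the ℓ1-norm.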
Keywords
Dictionary learning, Nonconvex, GMC regularization, DC programming and DCA, Forward–backward splitting algorithm