Atom Decomposition Based Subgradient Descent for Matrix Classification

Neurocomputing (2016)

Abstract
Matrices are a natural representation for data with complex structure, such as images and electroencephalogram (EEG) recordings. When learning a classifier for such matrix data, the structural information of the feature matrix is useful. In this paper, we focus on regularized matrix classifiers whose input samples and weight parameters are both matrices. Several existing approaches assume that the weight matrix has a low-rank structure and therefore use its nuclear norm as a regularization term. However, the optimization methods for these matrix classifiers typically require many expensive singular value decomposition (SVD) operations, which prevents them from scaling beyond moderate matrix sizes. To reduce the time complexity, we propose a novel learning algorithm called Atom Decomposition Based Subgradient Descent (ADBSD), which solves the optimization problem for a matrix classifier whose objective function combines the hinge loss with both the Frobenius norm and the nuclear norm of the weight matrix. ADBSD is an iterative scheme that, at each iteration, selects the most informative rank-one matrices from the subgradient of the objective function. We adopt atom-decomposition-based methods to minimize the nuclear norm because they rely mainly on computing the top singular vector pair, which yields substantial efficiency gains. We empirically evaluate ADBSD on both synthetic and real-world datasets; the results show that our approach is more efficient and more robust than state-of-the-art methods.
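As a reading aid (this formulation is inferred from the abstract, not quoted from the paper), the objective described above can be written as

\min_{W}\;\frac{1}{n}\sum_{i=1}^{n}\max\bigl(0,\,1-y_i\langle W, X_i\rangle\bigr)\;+\;\frac{\mu}{2}\|W\|_F^2\;+\;\lambda\|W\|_*,

where the X_i are feature matrices, y_i ∈ {-1, +1} are labels, ⟨W, X⟩ = tr(WᵀX), and μ, λ are assumed regularization weights. The sketch below illustrates the kind of greedy atom selection the abstract describes: a Frank-Wolfe-style update over a nuclear-norm ball, where each iteration needs only the top singular vector pair of a subgradient (obtainable by power iteration) rather than a full SVD. It is a minimal sketch under these assumptions, not the authors' exact algorithm; the names topk1_atom, tau, and mu are illustrative.

import numpy as np

def topk1_atom(G, iters=50):
    """Top singular vector pair of G via power iteration;
    avoids the full SVD, which is the efficiency point of
    atom decomposition methods."""
    v = np.random.default_rng(0).standard_normal(G.shape[1])
    for _ in range(iters):
        u = G @ v
        u /= np.linalg.norm(u)
        v = G.T @ u
        v /= np.linalg.norm(v)
    return u, v

def greedy_atom_step(W, X, y, mu, tau, t):
    """One Frank-Wolfe-style iteration in the spirit of ADBSD
    (a sketch, not the authors' exact update). Minimizes
    mean hinge loss + (mu/2)||W||_F^2 over {||W||_* <= tau},
    using a subgradient since the hinge loss is nonsmooth.
    X: (n, p, q) feature matrices; y: (n,) labels in {-1, +1}."""
    n = X.shape[0]
    # Subgradient of the mean hinge loss max(0, 1 - y_i <W, X_i>).
    margins = 1.0 - y * np.einsum('pq,npq->n', W, X)
    active = (margins > 0).astype(float)
    G = -np.einsum('n,npq->pq', y * active, X) / n
    G += mu * W  # Frobenius-norm part of the subgradient
    # Most informative rank-one atom: top singular pair of -G
    # minimizes <G, S> over the nuclear-norm ball of radius tau.
    u, v = topk1_atom(-G)
    S = tau * np.outer(u, v)
    gamma = 2.0 / (t + 2.0)  # standard Frank-Wolfe step size
    return (1.0 - gamma) * W + gamma * S

Each iteration thus costs a handful of matrix-vector products instead of a full SVD, which is the efficiency argument made in the abstract.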
Keywords
Subgradient Descent, Nuclear norm minimization, Atom decomposition