
# Robust Low-Rank Tensor Recovery: Models and Algorithms

SIAM Journal on Matrix Analysis and Applications, no. 1 (2014): 225–253

Abstract

Robust tensor recovery plays an instrumental role in robustifying tensor decompositions for multilinear data analysis against outliers, gross corruptions, and missing values and has a diverse array of applications. In this paper, we study the problem of robust low-rank tensor recovery in a convex optimization framework, drawing upon recent…

Introduction

- The rapid advance in modern computer technology has given rise to the wide presence of multidimensional data.
- Algorithms based on non-convex formulations have been proposed to robustify tensor decompositions against outliers [15, 36] and missing data [2].
- However, these methods lack global optimality guarantees

Highlights

- The rapid advance in modern computer technology has given rise to the wide presence of multidimensional data
- Robust low-rank tensor recovery plays an instrumental role in robustifying tensor decompositions, and it is useful in its own right
- We have focused on the computational aspect of this problem and presented two models within a convex optimization framework, Higher-order RPCA (HoRPCA), one of which naturally leads to a robust version of the Tucker decomposition
- We analyzed the empirical conditions under which exact recovery of a low-rank tensor is possible for the Singleton model of Higher-order RPCA. Among the convex models, the Singleton model achieved the best recovery accuracy when the underlying tensor was low-rank in all modes, whereas the Mixture model performed best when the tensor was low-rank in only some modes
- If the revealed ranks indicate that the data may be partially low-rank, Higher-order RPCA-S-ADP or Higher-order RPCA-M should be used instead
- Higher-order RPCA-C can be used as a refinement step based on the more precise rank information revealed
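The model-selection guidance above hinges on the mode-n (Tucker) ranks revealed by the recovery. As a hedged illustration (the function names, the rank tolerance, and the example tensor are ours, not the paper's), the Tucker rank can be estimated from the SVDs of the mode unfoldings:

```python
import numpy as np

def mode_unfold(tensor, mode):
    """Unfold along `mode` into the matrix X_(n), whose columns are mode-n fibers."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def mode_ranks(tensor, tol=1e-8):
    """Numerical rank of each mode-n unfolding, i.e. the Tucker rank."""
    ranks = []
    for n in range(tensor.ndim):
        s = np.linalg.svd(mode_unfold(tensor, n), compute_uv=False)
        ranks.append(int(np.sum(s > tol * s[0])))
    return ranks

# A 10x10x10 tensor that is low-rank (rank 2) in every mode:
np.random.seed(0)
core = np.random.randn(2, 2, 2)
A, B, C = (np.random.randn(10, 2) for _ in range(3))
X = np.einsum('abc,ia,jb,kc->ijk', core, A, B, C)
print(mode_ranks(X))  # -> [2, 2, 2]
```

If all entries of the returned list are small relative to the tensor dimensions, the tensor is low-rank in all modes (favoring the Singleton model); if only some are, the Mixture model applies.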

Methods

- All the proposed algorithms and experiments were run in MATLAB R2011b on a laptop with a 2.40 GHz Intel Core i5 CPU and 6 GB of memory.
- The authors report the number of iterations, since the per-iteration work of all tensor-based algorithms involves N SVDs and one shrinkage operation.
- RPCA (IALM) and TR-MALM were used as baselines in some experiments.
- The number of iterations for TR-MALM was averaged over the N RPCA instances.
- A description of how the parameters λ1, λ∗, and λ0∗ were set for the algorithms, along with a discussion of stopping criteria, can be found in Appendix E
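As a rough sketch of that per-iteration work (the names below are ours; this is not the authors' MATLAB code), each iteration applies two proximal operators: singular value thresholding once per mode unfolding (the N SVDs) and elementwise soft-thresholding for the sparse term (the shrinkage operation):

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the prox of tau*||.||_*; costs one SVD."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft_threshold(M, tau):
    """Elementwise shrinkage: the prox of tau*||.||_1."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)
```

For example, `svt(np.diag([3., 1.]), 2.0)` shrinks the singular values 3 and 1 to 1 and 0, so the result is `diag([1., 0.])`.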

Results

- The first observation from these experimental results is that when the Tucker rank was correctly specified, HoRPCA-C yielded significantly better recovery performance: it achieved near-exact recovery with far fewer observations (20%) and was more robust to data corruption, up to 40%.

Conclusion

- Robust low-rank tensor recovery plays an instrumental role in robustifying tensor decompositions, and it is useful in its own right.
- The authors have focused on the computational aspect of this problem and presented two models within a convex optimization framework, HoRPCA, one of which naturally leads to a robust version of the Tucker decomposition.
- Both the constrained and the Lagrangian formulations of the problem were considered, and the authors proposed efficient optimization algorithms with global convergence guarantees for each case.
- HoRPCA-C can be used as a refinement step based on the more precise rank information revealed


- Table 1: Reconstruction results for the amino acids data

Related work

- Several methods have been proposed for solving the RPCA problem, including the Iterative Thresholding algorithm [50], the Accelerated Proximal Gradient (APG/FISTA) algorithm with continuation [31] for the Lagrangian formulation of (1.1), a gradient algorithm applied to the dual problem of (1.1), and the Inexact Augmented Lagrangian Method (IALM) of [30]. It is reported in [30] that IALM was faster than APG on simulated data sets.
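A minimal augmented-Lagrangian loop for the matrix RPCA problem can be sketched as follows. This is our own rendering with common defaults (fixed penalty `mu`, `lam = 1/sqrt(max dim)`), not necessarily the exact schedule of IALM in [30]:

```python
import numpy as np

def rpca_alm(M, lam=None, mu=1.0, iters=300):
    """ALM/ADMM loop for min ||L||_* + lam*||S||_1  s.t.  L + S = M."""
    if lam is None:
        lam = 1.0 / np.sqrt(max(M.shape))
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    Y = np.zeros_like(M)
    for _ in range(iters):
        # low-rank update: singular value thresholding (one SVD per iteration)
        U, s, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = U @ np.diag(np.maximum(s - 1.0 / mu, 0.0)) @ Vt
        # sparse update: elementwise soft-thresholding
        R = M - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
        # dual ascent on the constraint L + S = M
        Y += mu * (M - L - S)
    return L, S

# demo: separate a rank-1 matrix from sparse corruptions
np.random.seed(2)
L0 = np.outer(np.random.randn(20), np.random.randn(20))
S0 = np.zeros((20, 20))
S0.flat[np.random.choice(400, 20, replace=False)] = 5.0
L_hat, S_hat = rpca_alm(L0 + S0)
```

The tensor algorithms discussed below generalize exactly this pattern, replacing the single SVT step with one SVT per mode unfolding.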

For the unconstrained formulation of Tensor Completion with the Singleton model,

$$\min_{\mathcal{X}} \; \lambda_* \sum_{i=1}^{N} \| X_{(i)} \|_* + \frac{1}{2} \| \mathcal{A}_\Omega(\mathcal{X}) - \mathcal{B}_\Omega \|^2, \qquad (2.19)$$

[20] and [46] both proposed an ADAL algorithm based on applying variable-splitting on X. For the Mixture model version of (2.19), [46] also proposed an ADAL method applied to the dual problem.
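A minimal sketch of that variable-splitting ADAL scheme for the Singleton model (2.19) might look as follows. This is our own scaled-ADMM rendering under assumed parameter names (`lam`, `rho`, `iters`), not the implementation from [20] or [46]:

```python
import numpy as np

def unfold(T, n):
    return np.moveaxis(T, n, 0).reshape(T.shape[n], -1)

def fold(M, n, shape):
    rest = [s for i, s in enumerate(shape) if i != n]
    return np.moveaxis(M.reshape([shape[n]] + rest), 0, n)

def svt(M, tau):
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def singleton_tc_adal(B, mask, lam=0.1, rho=1.0, iters=200):
    """Scaled ADMM for min_X lam*sum_i ||X_(i)||_* + 0.5*||mask*(X - B)||_F^2,
    with the splitting Y_i = X for each mode i (one SVT, i.e. one SVD, per mode)."""
    shape, N = B.shape, B.ndim
    X = np.zeros(shape)
    Y = [np.zeros(shape) for _ in range(N)]
    U = [np.zeros(shape) for _ in range(N)]
    for _ in range(iters):
        for i in range(N):  # nuclear-norm prox on each mode unfolding
            Y[i] = fold(svt(unfold(X - U[i], i), lam / rho), i, shape)
        # closed-form minimizer of the quadratic terms in X (elementwise)
        X = (mask * B + rho * sum(Yi + Ui for Yi, Ui in zip(Y, U))) / (mask + N * rho)
        for i in range(N):  # scaled dual updates
            U[i] += Y[i] - X
    return X

# demo: complete a rank-1 6x6x6 tensor from 70% of its entries
np.random.seed(1)
T = np.einsum('i,j,k->ijk', np.random.randn(6), np.random.randn(6), np.random.randn(6))
mask = (np.random.rand(6, 6, 6) < 0.7).astype(float)
X_hat = singleton_tc_adal(mask * T, mask, lam=0.05)
```

Each iteration performs exactly the work described in the Methods section: N SVDs (one per mode) plus cheap elementwise updates.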

There have been some attempts to tackle the HoRPCA problem (2.2) with applications in computer vision and image processing. The RSTD algorithm proposed in [29] uses a vanilla Block Coordinate Descent (BCD) approach to solve an unconstrained formulation of the problem.

Funding

- This research was supported in part by NSF Grant DMS-1016571, ONR Grant N00014-08-1-1118, and DOE Grant DE-FG02-08ER25856

Reference

- E. Abdallah, A. Hamza, and P. Bhattacharya. Mpeg video watermarking using tensor singular value decomposition. Image Analysis and Recognition, pages 772–783, 2007.
- E. Acar, D. Dunlavy, T. Kolda, and M. Mørup. Scalable tensor factorizations with missing data. SIAM Data Mining 2010 (SDM 2010), 2010.
- B. Bader and T. Kolda. Efficient MATLAB computations with sparse and factored tensors. SIAM Journal on Scientific Computing, 30(1):205, 2009.
- D. Baunsgaard. Factors affecting three-way modeling (PARAFAC) of fluorescence landscapes. Frederiksberg, denmark, Royal Veterinary and Agricultural University, Department of Dairy and Food Technology, 1999.
- A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1):183–202, 2009.
- A. Beck and L. Tetruashvili. On the convergence of block coordinate descent type methods. Technion, Israel Institute of Technology, Haifa, Israel, Tech. Rep, 2011.
- D. Bertsekas and J. Tsitsiklis. Parallel and Distributed Computation: Numerical Methods. Prentice-Hall, Inc., 1989.
- S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Machine Learning, 3(1):1–123, 2010.
- E. Candès, Y. Ma, J. Wright, et al. Robust principal component analysis? Journal of the ACM, 58(3), 2011.
- J. Carroll and J. Chang. Analysis of individual differences in multidimensional scaling via an n-way generalization of eckart-young decomposition. Psychometrika, 35(3):283–319, 1970.
- L. De Lathauwer, B. De Moor, and J. Vandewalle. A multilinear singular value decomposition. SIAM Journal on Matrix Analysis and Applications, 21(4):1253–1278, 2000.
- L. De Lathauwer, B. De Moor, and J. Vandewalle. On the best rank-1 and rank-(R1, R2, ..., RN) approximation of higher-order tensors. SIAM Journal on Matrix Analysis and Applications, 21(4):1324–1342, 2000.
- C. Eckart and G. Young. The approximation of one matrix by another of lower rank. Psychometrika, 1(3):211–218, 1936.
- J. Eckstein and D. Bertsekas. On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Mathematical Programming, 55(1):293–318, 1992.
- S. Engelen, S. Frosch, and B. Jørgensen. A fully robust parafac method for analyzing fluorescence data. Journal of chemometrics, 23(3):124–131, 2009.
- S. Engelen and M. Hubert. Detecting outlying samples in a parafac model. Analytica Chimica Acta, 2011.
- S. Engelen, S. Møller, and M. Hubert. Automatically identifying scatter in fluorescence data using robust techniques. Chemometrics and intelligent laboratory systems, 86(1):35–51, 2007.
- T. Franz, A. Schultz, S. Sizov, and S. Staab. Triplerank: Ranking semantic web data by tensor decomposition. The Semantic Web-ISWC 2009, pages 213–228, 2009.
- D. Gabay and B. Mercier. A dual algorithm for the solution of nonlinear variational problems via finite element approximation. Computers & Mathematics with Applications, 2(1):17–40, 1976.
- S. Gandy, B. Recht, and I. Yamada. Tensor completion and low-n-rank tensor recovery via convex optimization. Inverse Problems, 27:025010, 2011.
- R. Glowinski and A. Marroco. Sur l’approximation, par elements finis d’ordre un, et la resolution, par penalisation-dualite d’une classe de problemes de dirichlet non lineares. Rev. Francaise d’Automat. Inf. Recherche Operationelle, (9):41–76, 1975.
- R. Harshman. Foundations of the PARAFAC procedure: models and conditions for an "explanatory" multimodal factor analysis. 1970.
- J. Hastad. Tensor rank is NP-complete. Journal of Algorithms, 11(4):644–654, 1990.
- K. Hayashi, T. Takenouchi, T. Shibata, Y. Kamiya, D. Kato, K. Kunieda, K. Yamada, and K. Ikeda. Exponential family tensor factorization for missing-values prediction and anomaly detection. In Data Mining (ICDM), 2010 IEEE 10th International Conference on, pages 216–225. IEEE, 2010.
- H. Huang and C. Ding. Robust tensor factorization using R1 norm. In Computer Vision and Pattern Recognition, 2008. CVPR 2008. IEEE Conference on, pages 1–8. IEEE, 2008.
- T. Kolda and B. Bader. Tensor decompositions and applications. SIAM review, 51(3), 2009.
- R. Larsen. PROPACK: a software package for the symmetric eigenvalue problem and singular value problems based on Lanczos and Lanczos bidiagonalization with partial reorthogonalization. SCCM, 2004.
- X. Li. Compressed sensing and matrix completion with constant proportion of corruptions. Arxiv Preprint arXiv:1104.1041, 2011.
- Y. Li, J. Yan, Y. Zhou, and J. Yang. Optimum subspace learning and error correction for tensors. Computer Vision–ECCV 2010, pages 790–803, 2010.
- Z. Lin, M. Chen, L. Wu, and Y. Ma. The augmented lagrange multiplier method for exact recovery of corrupted low-rank matrices. Arxiv Preprint arXiv:1009.5055, 2010.
- Z. Lin, A. Ganesh, J. Wright, L. Wu, M. Chen, and Y. Ma. Fast convex optimization algorithms for exact recovery of a corrupted low-rank matrix. CAMSAP, 2009.
- J. Liu, P. Musialski, P. Wonka, and J. Ye. Tensor completion for estimating missing values in visual data. In Computer Vision, 2009 IEEE 12th International Conference on, pages 2114–2121. IEEE, 2009.
- S. Ma, D. Goldfarb, and L. Chen. Fixed point and bregman iterative methods for matrix rank minimization. Mathematical Programming, pages 1–33, 2009.
- S. Ma, L. Xue, and H. Zou. Alternating direction methods for latent variable gaussian graphical model selection. Technical report, University of Minnesota, 2012.
- Y. Nesterov. Efficiency of coordinate descent methods on huge-scale optimization problems. SIAM Journal on Optimization, 22(2):341–362, 2012.
- V. Pravdova, F. Estienne, B. Walczak, and D. Massart. A robust version of the tucker3 model. Chemometrics and Intelligent Laboratory Systems, 59(1):75–88, 2001.
- Z. Qin and D. Goldfarb. Structured sparsity via alternating direction methods. Journal of Machine Learning Research, 13:1373–1406, 2012.
- P. Richtarik and M. Takac. Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function. Mathematical Programming, pages 1–38, 2012.
- J. Riu and R. Bro. Jack-knife technique for outlier detection and estimation of standard errors in parafac models. Chemometrics and Intelligent Laboratory Systems, 65(1):35–49, 2003.
- P. Rousseeuw, M. Debruyne, S. Engelen, and M. Hubert. Robustness and outlier detection in chemometrics. Critical reviews in analytical chemistry, 36(3-4):221–242, 2006.
- Y. Shen, Z. Wen, and Y. Zhang. Augmented lagrangian alternating direction method for matrix separation based on low-rank factorization. TR11-02, Rice University, 2011.
- J. Sun, H. Zeng, H. Liu, Y. Lu, and Z. Chen. Cubesvd: a novel approach to personalized web search. In Proceedings of the 14th International Conference on World Wide Web, pages 382–390. ACM, 2005.
- H. Tan, B. Cheng, J. Feng, G. Feng, and Y. Zhang. Tensor recovery via multi-linear augmented lagrange multiplier method. In Proceedings of ICIG 2011, pages 141–146. IEEE, 2011.
- K. Toh and S. Yun. An accelerated proximal gradient algorithm for nuclear norm regularized linear least squares problems. Pacific Journal of Optimization, 6(615-640):15, 2010.
- G. Tomasi and R. Bro. A comparison of algorithms for fitting the parafac model. Computational Statistics & Data Analysis, 50(7):1700–1734, 2006.
- R. Tomioka, K. Hayashi, and H. Kashima. Estimation of low-rank tensors via convex optimization. Arxiv Preprint arXiv:1010.0789, 2010.
- R. Tomioka, T. Suzuki, K. Hayashi, and H. Kashima. Statistical performance of convex tensor decomposition. Advances in Neural Information Processing Systems (NIPS), page 137, 2011.
- L. Tucker. Some mathematical notes on three-mode factor analysis. Psychometrika, 31(3):279–311, 1966.
- M. Vasilescu and D. Terzopoulos. Multilinear subspace analysis of image ensembles. In Proceedings of CVPR, volume 2, pages II–93. IEEE, 2003.
- J. Wright, A. Ganesh, S. Rao, and Y. Ma. Robust principal component analysis: Exact recovery of corrupted low-rank matrices via convex optimization. submitted to Journal of the ACM, 2009.
- J. Yang and Y. Zhang. Alternating direction algorithms for l1-problems in compressive sensing. SIAM Journal on Scientific Computing, 33(1):250–278, 2011.
- M. Yuan and Y. Lin. Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 68(1):49–67, 2006.
