Superfast Low-Rank Approximation and Least Squares Regression.

arXiv: Numerical Analysis (2018)

Citations: 23 | Views: 23
Abstract
Low-rank approximation is among the most fundamental subjects of numerical linear algebra, with important applications to various areas of modern computing, ranging from machine learning theory and neural networks to data mining and analysis. The known algorithms compute such approximations by using more flops than the input matrix has entries, but we prove that far fewer flops than entries suffice in the case of the average input ("flop" stands for "floating point arithmetic operation"). We prove this twice, for two distinct algorithms, which we analyze by applying two different approaches. Our analysis of both algorithms is quite involved, but we devise them mostly by simplifying, combining, and improving known techniques, although we also propose some technical novelties that further enhance the performance of the popular Cross-Approximation algorithms. These algorithms are highly efficient empirically, and we prove that they are efficient for the average input. We specify some narrow classes of hard inputs for which the presented algorithms fail with high probability even when we randomize them, but we narrow such classes further by means of preprocessing with new sparse and structured multipliers. The average-case complexity estimates do not cover many realistic input classes, but our formal analysis is in good accordance with the results of our tests on benchmark inputs from discretized PDEs and integral equations and on random inputs. Our work should already be of practical value, but it also leads to research challenges. At the end we list some of them, propose two novel extensions of our progress, namely to the acceleration of the Fast Multipole Method and Conjugate Gradient algorithms, and explore and slightly extend the recent techniques of Osinsky, which enhance the output accuracy of CUR approximation.
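To make the CUR (cross-approximation) factorization discussed in the abstract concrete, the following is a minimal illustrative sketch in NumPy. It is not the paper's superfast algorithm: the row and column index sets here are chosen uniformly at random, whereas the paper's methods select them adaptively and apply sparse and structured preprocessing multipliers. All function names and parameters below (cur_sketch, oversample, the test sizes) are illustrative choices, not from the paper.

# Sketch of a CUR (cross-)approximation A ~= C @ U @ R built from a small
# "cross" of rows and columns. Index sets are sampled uniformly at random
# here purely for illustration; such naive sampling can fail on the hard
# input classes the abstract mentions.
import numpy as np

def cur_sketch(A, k, oversample=2, seed=0):
    """Return (C, U, R) with C = A[:, J], R = A[I, :], U = pinv(A[I, J])."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    s = min(k + oversample, m, n)             # number of sampled rows/columns
    I = rng.choice(m, size=s, replace=False)  # row indices of the cross
    J = rng.choice(n, size=s, replace=False)  # column indices of the cross
    C = A[:, J]                               # tall column panel
    R = A[I, :]                               # wide row panel
    U = np.linalg.pinv(A[np.ix_(I, J)])       # core: pseudoinverse of the cross
    return C, U, R

if __name__ == "__main__":
    # Rank-k test matrix plus tiny noise, so an accurate CUR exists.
    rng = np.random.default_rng(1)
    m, n, k = 400, 300, 10
    A = rng.standard_normal((m, k)) @ rng.standard_normal((k, n))
    A += 1e-8 * rng.standard_normal((m, n))
    C, U, R = cur_sketch(A, k)
    err = np.linalg.norm(A - C @ U @ R) / np.linalg.norm(A)
    print(f"relative Frobenius error of CUR sketch: {err:.2e}")

Note that forming C, R, and the small cross A[I, J] touches only a few rows and columns of A, which is why cross-approximation can run in fewer flops than the matrix has entries; the paper's contribution concerns how to choose the cross so that this works provably for the average input.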