Input Sparsity and Hardness for Robust Subspace Approximation

IEEE Symposium on Foundations of Computer Science (2015)

Cited 82 | Views 81
Abstract
In the subspace approximation problem, we seek a k-dimensional subspace F of R^d that minimizes the sum of p-th powers of Euclidean distances to a given set of n points a_1, …, a_n ∈ R^d, for p ≥ 1. More generally than minimizing Σ_i dist(a_i, F)^p, we may wish to minimize Σ_i M(dist(a_i, F)) for some loss function M(·), for example an M-estimator, a class that includes the Huber and Tukey loss functions. Such subspaces provide alternatives to the singular value decomposition (SVD), which corresponds to the p = 2 case of finding an F that minimizes the sum of squares of distances. For p ∈ [1, 2), and for typical M-estimators, the minimizing F is more robust to outliers than the subspace provided by the SVD.

We give several algorithmic results for these robust subspace approximation problems. We state our results thinking of the n points as forming an n × d matrix A, and letting nnz(A) denote the number of non-zero entries of A. Our results hold for p ∈ [1, 2). We use poly(n) to denote n^{O(1)} as n → ∞.

1) For minimizing Σ_i dist(a_i, F)^p, we give an algorithm running in O(nnz(A) + (n + d) poly(k/ε) + exp(poly(k/ε))) time that outputs a k-dimensional subspace F whose cost is at most a (1 + ε)-factor larger than the optimum.

2) We show that minimizing Σ_i dist(a_i, F)^p is NP-hard, even to output a (1 + 1/poly(d))-approximation. This extends the work of Deshpande et al. (SODA, 2011), which showed NP-hardness or UGC-hardness only for p > 2; their proofs rely critically on p > 2. Our work resolves an open question of [Kannan and Vempala, NOW, 2009]. Thus there cannot be an algorithm running in time polynomial in k and 1/ε unless P = NP. Together with prior work, this implies that the problem is NP-hard for all p ≠ 2.

3) For the loss functions of a wide class of M-estimators, we give a problem-size reduction: for a parameter K = (log n)^{O(log k)}, our reduction takes O(nnz(A) log n + (n + d) poly(K/ε)) time to reduce the problem to a constrained version involving matrices whose dimensions are poly(K ε^{-1} log n). We also give bicriteria solutions.

4) Our techniques lead to the first O(nnz(A) + poly(d/ε))-time algorithms for (1 + ε)-approximate regression for a wide class of convex M-estimators. This improves prior results [1], which gave a (1 + ε)-approximation for Huber regression only, and an O(1)-approximation for a general class of M-estimators.
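To make the objective concrete, the following is a minimal Python sketch (not the paper's input-sparsity algorithm) that evaluates Σ_i dist(a_i, F)^p and a Huber-loss variant for a candidate subspace. The orthonormal basis V, the Huber threshold tau, and the synthetic data with injected outliers are illustrative assumptions; the sketch uses the top-k SVD subspace, which is optimal for p = 2 but generally not for p ∈ [1, 2) or M-estimator losses on data with outliers.

```python
# Minimal sketch: evaluating the robust subspace approximation objective
# for a candidate k-dimensional subspace F, represented by an orthonormal
# basis V (d x k). Illustrative only, not the paper's algorithm.
import numpy as np

def dist_to_subspace(A, V):
    """Euclidean distance of each row of A to the column span of V (orthonormal)."""
    residual = A - (A @ V) @ V.T   # per-point projection residual
    return np.linalg.norm(residual, axis=1)

def lp_cost(A, V, p):
    """Sum_i dist(a_i, F)^p, the subspace approximation objective."""
    return np.sum(dist_to_subspace(A, V) ** p)

def huber_cost(A, V, tau=1.0):
    """Sum_i M(dist(a_i, F)) for the Huber M-estimator with threshold tau."""
    d = dist_to_subspace(A, V)
    return np.sum(np.where(d <= tau, 0.5 * d**2, tau * (d - 0.5 * tau)))

# Synthetic data with a few gross outliers (hypothetical example).
rng = np.random.default_rng(0)
n, d, k = 200, 10, 3
A = rng.standard_normal((n, d))
A[:5] *= 50.0                      # outlier rows that dominate the p = 2 cost

# Top-k right singular vectors: the exact minimizer for p = 2.
_, _, Vt = np.linalg.svd(A, full_matrices=False)
V_svd = Vt[:k].T
print("p=2 cost of SVD subspace:   ", lp_cost(A, V_svd, 2.0))
print("p=1 cost of SVD subspace:   ", lp_cost(A, V_svd, 1.0))
print("Huber cost of SVD subspace: ", huber_cost(A, V_svd))
```

For p ∈ [1, 2) or Huber loss, the SVD subspace is only a heuristic starting point; finding a (1 + ε)-approximate minimizer of these robust costs is exactly what result 1) above achieves, and result 2) explains why an algorithm polynomial in k and 1/ε is unlikely.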
Keywords
approximation,numerical linear algebra,regression,robust statistics,sampling,sketching