High Dimensional Linear Regression using Lattice Basis Reduction.
Neural Information Processing Systems (2018)
Abstract
We consider a high dimensional linear regression problem where the goal is to efficiently recover an unknown vector $\beta^*$ from $n$ noisy linear observations $Y = X\beta^* + W \in \mathbb{R}^n$, for known $X \in \mathbb{R}^{n \times p}$ and unknown $W \in \mathbb{R}^n$. Unlike most of the literature on this model, we make no sparsity assumption on $\beta^*$. Instead we adopt a regularization based on assuming that the underlying vectors $\beta^*$ have rational entries with the same denominator $Q \in \mathbb{Z}_{>0}$. We call this the $Q$-rationality assumption. We propose a new polynomial-time algorithm for this task which is based on the seminal Lenstra-Lenstra-Lovász (LLL) lattice basis reduction algorithm. We establish that under the $Q$-rationality assumption, our algorithm recovers the vector $\beta^*$ exactly for a large class of distributions for the i.i.d. entries of $X$ and non-zero noise $W$. We prove that it is successful under small noise, even when the learner has access to only one observation ($n = 1$). Furthermore, we prove that in the case of Gaussian white noise $W$, $n = o(p/\log p)$, and sufficiently large $Q$, our algorithm tolerates a nearly optimal information-theoretic level of noise.
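To make the lattice-reduction idea concrete, below is a minimal sketch of $Q$-rational recovery in the noiseless, single-observation ($n = 1$) case. It is not the paper's exact basis construction; it uses a standard linear-relation embedding, assumes the fpylll library for LLL reduction, and all parameter values (p, Q, N, M) are illustrative choices.

```python
# Hypothetical sketch: recover a Q-rational beta from one noiseless
# observation y = <x, beta> using LLL. Assumes fpylll (pip install fpylll);
# this is an illustrative embedding, not the paper's exact construction.
from fractions import Fraction
import random

from fpylll import IntegerMatrix, LLL

p, Q, N = 6, 10, 80   # dimension, common denominator, bits per x-entry
M = 1                 # penalty weight; x's entries are already huge here

# x plays the role of the single row of X; its entries are modeled directly
# as large random integers (the paper works with N-bit truncations).
x = [random.randrange(1, 2**N) for _ in range(p)]
beta = [Fraction(random.randint(-Q, Q), Q) for _ in range(p)]  # ground truth
Qy = sum(x[j] * int(Q * beta[j]) for j in range(p))            # Q * <x, beta>

# Basis rows: (e_j, 0, M*x_j) for each coordinate, plus (0, 1, -M*Q*y).
# The integer combination with coefficients (Q*beta, 1) yields the short
# lattice vector (Q*beta, 1, 0), which LLL should expose in the basis.
B = IntegerMatrix(p + 1, p + 2)
for j in range(p):
    B[j, j] = 1
    B[j, p + 1] = M * x[j]
B[p, p] = 1
B[p, p + 1] = -M * Qy

LLL.reduction(B)

# Scan the reduced basis for a row of the form (z, +/-1, 0); then beta = z/Q.
for r in range(p + 1):
    row = [B[r, c] for c in range(p + 2)]
    if abs(row[p]) == 1 and row[p + 1] == 0:
        rec = [Fraction(row[p] * v, Q) for v in row[:p]]
        print("recovered beta:", rec)
        print("exact match   :", rec == beta)
        break
```

Heuristically, any integer relation $\langle x, k \rangle = 0$ with random $N$-bit entries requires $\|k\| \approx 2^{N/(p-1)}$, so for large $N$ the target vector $(Q\beta^*, 1, 0)$ is far shorter than any spurious lattice vector and LLL's $2^{(d-1)/2}$ approximation factor suffices to isolate it.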
Keywords
polynomial-time algorithm, linear regression, high dimensional, sufficiently large, Gaussian white noise, lattice basis reduction