Sub-Gaussian Matrices on Sets: Optimal Tail Dependence and Applications

COMMUNICATIONS ON PURE AND APPLIED MATHEMATICS (2022)

Abstract
Random linear mappings are widely used in modern signal processing, compressed sensing, and machine learning. These mappings may be used to embed the data into a significantly lower dimension while at the same time preserving useful information. This is done by approximately preserving the distances between data points, which are assumed to lie in ℝ^n. Thus, the performance of these mappings is usually captured by how close they are to an isometry on the data. Gaussian linear mappings have been the object of much study, while the sub-Gaussian setting is not yet fully understood. In the latter case, the performance depends on the sub-Gaussian norm of the rows. In many applications, e.g., compressed sensing, this norm may be large, or even growing with dimension, and thus it is important to characterize this dependence. We study when a sub-Gaussian matrix can become a near isometry on a set, show that the previously best-known dependence on the sub-Gaussian norm was suboptimal, and present the optimal dependence. Our result not only answers a question left open by Liaw, Mehrabian, Plan, and Vershynin in 2017, but also generalizes their work. We also develop a new Bernstein-type inequality for subexponential random variables, and a new Hanson-Wright inequality for quadratic forms of sub-Gaussian random variables, in both cases improving the bounds in the sub-Gaussian regime under moment constraints. Finally, we illustrate popular applications such as Johnson-Lindenstrauss embeddings, the null space property for 0-1 matrices, randomized sketches, and blind demodulation, whose theoretical guarantees can be improved by our results (in the sub-Gaussian case). (c) 2021 Wiley Periodicals LLC.
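To make the near-isometry property concrete, the following minimal sketch (not taken from the paper) checks empirically how well a sub-Gaussian random matrix preserves pairwise distances of a finite point set, in the spirit of a Johnson-Lindenstrauss embedding. The Rademacher construction and the dimensions n, m, and num_points are illustrative assumptions, not the paper's specific setting.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: ambient dimension, target dimension, set size.
n, m, num_points = 1000, 200, 50
X = rng.standard_normal((num_points, n))  # data points in R^n

# Sub-Gaussian measurement matrix: i.i.d. Rademacher (+/-1) entries,
# scaled by 1/sqrt(m) so that E ||A x||^2 = ||x||^2 for every fixed x.
A = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)

Y = X @ A.T  # embedded points in R^m

def pairwise_dists(Z):
    # All pairwise Euclidean distances via broadcasting.
    diffs = Z[:, None, :] - Z[None, :, :]
    return np.linalg.norm(diffs, axis=-1)

# Ratios close to 1 mean the embedding acts as a near isometry on the set.
D_orig, D_emb = pairwise_dists(X), pairwise_dists(Y)
mask = ~np.eye(num_points, dtype=bool)  # ignore zero diagonal
ratios = D_emb[mask] / D_orig[mask]
print(f"distance ratios in [{ratios.min():.3f}, {ratios.max():.3f}]")

The paper's contribution concerns how the quality of such an embedding degrades as the sub-Gaussian norm of the rows grows; this sketch only illustrates the property being quantified, not the optimal dependence itself.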
Keywords
optimal tail dependence, sets, sub-Gaussian