Tight Query Complexity Lower Bounds for PCA via Finite Sample Deformed Wigner Law

STOC '18: Symposium on Theory of Computing, Los Angeles, CA, USA, June 2018

ๅผ•็”จ 42|ๆต่งˆ67
ๆš‚ๆ— ่ฏ„ๅˆ†
ๆ‘˜่ฆ
We prove a query complexity lower bound for approximating the top r-dimensional eigenspace of a matrix. We consider an oracle model where, given a symmetric matrix M ∈ ℝ^{d×d}, an algorithm Alg is allowed to make T exact queries of the form w^(i) = M x^(i) for i in {1, ..., T}, where x^(i) is drawn from a distribution which depends arbitrarily on the past queries and measurements {x^(j), w^(j)}_{1 ≤ j ≤ i−1}. We show that for every gap ∈ (0, 1/2], there exists a distribution over matrices M for which 1) gap_r(M) = Ω(gap) (where gap_r(M) is the normalized gap between the r-th and (r+1)-st largest-magnitude eigenvalues of M), and 2) any algorithm Alg which makes fewer than const × (r log d)/√gap queries fails (with overwhelming probability) to identify a matrix V ∈ ℝ^{d×r} with orthonormal columns for which ⟨V, MV⟩ ≥ (1 − const × gap) ∑_{i=1}^{r} λ_i(M). Our bound requires only that d is a small polynomial in 1/gap and r, and matches the upper bounds of Musco and Musco '15. Moreover, it establishes a strict separation between convex optimization and randomized, "strict-saddle" non-convex optimization, of which PCA is a canonical example: in the former, first-order methods can have dimension-free iteration complexity, whereas in PCA the iteration complexity of gradient-based methods must necessarily grow with the dimension.
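The oracle model above can be made concrete with a small sketch: an algorithm sees M only through matrix-vector products w = Mx, and its quality is judged by ⟨V, MV⟩ relative to the top-r eigenvalue sum. The snippet below is an illustrative instance, not the paper's construction: it plants an r-dimensional signal above the bulk of a Wigner matrix (a deformed-Wigner-style input, with the planting strength chosen by hand) and runs block power iteration, a simple algorithm in this query model, counting oracle calls as it goes.

```python
import numpy as np

def make_oracle(M):
    """Wrap a symmetric matrix as a counting matrix-vector oracle w = M x."""
    count = {"queries": 0}
    def query(x):
        count["queries"] += 1
        return M @ x
    return query, count

def block_power_method(query, d, r, T, rng):
    """An adaptive algorithm in the oracle model: each new query may depend on
    all past queries and measurements. Returns V with orthonormal columns."""
    V, _ = np.linalg.qr(rng.standard_normal((d, r)))
    for _ in range(T):
        # r oracle calls per iteration, one per column of V
        W = np.column_stack([query(V[:, j]) for j in range(r)])
        V, _ = np.linalg.qr(W)
    return V

# Illustrative deformed-Wigner-style instance (parameters are assumptions):
# rank-r planted signal sitting gap-dependently above the Wigner bulk edge ~2.
rng = np.random.default_rng(0)
d, r, gap = 200, 2, 0.25
G = rng.standard_normal((d, d))
Wn = (G + G.T) / np.sqrt(2 * d)               # Wigner noise, spectrum roughly [-2, 2]
U, _ = np.linalg.qr(rng.standard_normal((d, r)))
M = Wn + (2 + 3 * gap) * (U @ U.T)            # planted eigenvalues above the bulk

query, count = make_oracle(M)
V = block_power_method(query, d, r, T=30, rng=rng)

eigs = np.sort(np.linalg.eigvalsh(M))[::-1]
quality = np.trace(V.T @ M @ V) / eigs[:r].sum()   # ⟨V, MV⟩ / Σ_{i≤r} λ_i(M)
```

Here `count["queries"]` is exactly T·r, and `quality` approaching 1 corresponds to the success criterion ⟨V, MV⟩ ≥ (1 − const × gap) Σ λ_i(M); the lower bound says no algorithm in this model, however adaptive, can reach that threshold with fewer than const × (r log d)/√gap queries on the hard distribution.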
ๆ›ดๅคš
ๆŸฅ็œ‹่ฏ‘ๆ–‡
ๅ…ณ้”ฎ่ฏ
Lower Bounds, Query Complexity, PCA, Optimization, Random Matrix Theory
AI ็†่งฃ่ฎบๆ–‡
ๆบฏๆบๆ ‘
ๆ ทไพ‹
็”Ÿๆˆๆบฏๆบๆ ‘๏ผŒ็ ”็ฉถ่ฎบๆ–‡ๅ‘ๅฑ•่„‰็ปœ
Chat Paper
ๆญฃๅœจ็”Ÿๆˆ่ฎบๆ–‡ๆ‘˜่ฆ