Batch Sparse Recovery, or How to Leverage the Average Sparsity.

arXiv: Data Structures and Algorithms (2018)

Abstract
We introduce a \emph{batch} version of sparse recovery, where the goal is to report a sequence of vectors $A'_1,\ldots,A'_m \in \mathbb{R}^n$ that estimate unknown signals $A_1,\ldots,A_m \in \mathbb{R}^n$ using few linear measurements, each involving exactly one signal vector, under an assumption of \emph{average sparsity}. More precisely, we want to have
$$(1) \quad \sum_{j \in [m]} \|A_j - A'_j\|_p^p \;\le\; C \cdot \min \Big\{ \sum_{j \in [m]} \|A_j - A^*_j\|_p^p \Big\}$$
for predetermined constants $C \ge 1$ and $p$, where the minimum is over all $A^*_1,\ldots,A^*_m \in \mathbb{R}^n$ that are $k$-sparse on average. We assume $k$ is given as input, and ask for the minimal number of measurements required to satisfy $(1)$. The special case $m=1$ is known as stable sparse recovery and has been studied extensively. We resolve the question for $p=1$ up to polylogarithmic factors, by presenting a randomized adaptive scheme that performs $\tilde{O}(km)$ measurements and with high probability has output satisfying $(1)$, for arbitrarily small $C > 1$. Finally, we show that adaptivity is necessary for every non-trivial scheme.
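For intuition about the benchmark on the right-hand side of $(1)$: zeroing out an entry of $A_j$ costs $|A_{j,i}|^p$, so the optimal vectors $A^*_1,\ldots,A^*_m$ that are $k$-sparse on average (at most $km$ nonzeros in total) simply keep the $km$ largest-magnitude entries across all $m$ signals. The following Python sketch (ours, not from the paper; the function name batch_kterm_error is hypothetical) computes that benchmark.

```python
import numpy as np

def batch_kterm_error(A, k, p=1):
    """l_p^p error of the best approximation of the m rows of A by
    vectors that are k-sparse on average: with a total budget of k*m
    nonzeros, the optimum keeps the k*m largest-magnitude entries
    anywhere in A, so the error is the l_p^p mass of the rest."""
    m, n = A.shape
    budget = k * m
    if budget >= m * n:
        return 0.0
    mass = np.abs(A).ravel() ** p
    # np.partition puts the (m*n - budget) smallest values first;
    # their total mass is the error of the optimal approximation.
    return np.partition(mass, m * n - budget)[: m * n - budget].sum()

# Example: benchmark for guarantee (1) with p = 1.
rng = np.random.default_rng(0)
A = rng.standard_normal((10, 200))   # m = 10 signals of dimension n = 200
bench = batch_kterm_error(A, k=20)
# A scheme's output A'_1,...,A'_m satisfies (1) exactly when
# sum_j ||A_j - A'_j||_1 <= C * bench.
```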