Regularized Gradient Descent: A Non-Convex Recipe For Fast Joint Blind Deconvolution And Demixing

Information and Inference: A Journal of the IMA (2019)

Abstract
We study the question of extracting a sequence of functions $\{f_i, g_i\}_{i=1}^{s}$ from observing only the sum of their convolutions, i.e. from $y = \sum_{i=1}^{s} f_i \ast g_i$. While convex optimization techniques are able to solve this joint blind deconvolution-demixing problem provably and robustly under certain conditions, for medium-size or large-size problems we need computationally faster methods without sacrificing the benefits of mathematical rigor that come with convex methods. In this paper, we present a non-convex algorithm which guarantees exact recovery under conditions that are competitive with convex optimization methods, with the additional advantage of being computationally much more efficient. Our two-step algorithm converges linearly to the global minimum and is also robust in the presence of additive noise. While the derived performance bounds are suboptimal in terms of the information-theoretic limit, numerical simulations show remarkable performance even when the number of measurements is close to the number of degrees of freedom. We discuss an application of the proposed framework in wireless communications in connection with the Internet-of-Things.
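
For concreteness, the following is a minimal Python sketch of the measurement model $y = \sum_{i=1}^{s} f_i \ast g_i$ only, not of the authors' recovery algorithm; the use of circular convolution, the sizes s and L, and the random Gaussian signals are illustrative assumptions rather than details taken from the paper.

    import numpy as np

    def circ_conv(f, g):
        # Circular convolution via the FFT: the Fourier transform turns
        # convolution into pointwise multiplication.
        return np.fft.ifft(np.fft.fft(f) * np.fft.fft(g))

    # Illustrative sizes (assumed): s signal pairs, each of length L.
    s, L = 3, 128
    rng = np.random.default_rng(0)
    f = [rng.standard_normal(L) for _ in range(s)]
    g = [rng.standard_normal(L) for _ in range(s)]

    # Only the sum of the s convolutions is observed; joint blind
    # deconvolution-demixing asks to recover every pair (f_i, g_i) from y alone.
    y = sum(circ_conv(fi, gi) for fi, gi in zip(f, g))

Without further structural assumptions on the f_i and g_i (e.g. that each lies in a known low-dimensional subspace), the recovery problem above is ill-posed, which is why conditions are needed for the exact-recovery guarantees mentioned in the abstract.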
Keywords
Blind deconvolution, blind demixing, non-convex optimization, random matrix, signal processing, wireless communication