# Robust One-Bit Recovery via ReLU Generative Networks: Near Optimal Statistical Rate and Global Landscape Analysis

ICML, pp.7857-7866, (2020)


Abstract

We study the robust one-bit compressed sensing problem, whose goal is to design an algorithm that faithfully recovers any sparse target vector $\theta_0\in\mathbb{R}^d$ uniformly from $m$ quantized noisy measurements. Under the assumption that the measurements are sub-Gaussian random vectors, to recover any $k$-sparse $\theta_0$ ($k\ll d$)…


Introduction

- Quantized compressed sensing investigates how to design the sensing procedure, quantizer and reconstruction algorithm so as to recover a high-dimensional vector from a limited number of quantized measurements.
- Previous theoretical successes on this problem (e.g., Jacques et al. (2013); Plan and Vershynin (2013)) mainly rely on two key assumptions: (1) the Gaussianity of the sensing vector a_i, and (2) the sparsity of the vector θ0 on a given basis.
- The practical significance of these assumptions is rather limited: it is difficult to generate Gaussian vectors in practice, and high-dimensional targets are often distributed near a low-dimensional manifold rather than being sparse on some given basis.
- The goal of this work is to make steps towards addressing these two limitations.

§Department of Operations Research and Financial Engineering, Princeton University.
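As a concrete illustration of the measurement model, the sketch below generates dithered one-bit observations y_i = sign(⟨a_i, θ0⟩ + ξ_i + τ_i) of a sparse target. The dimensions, noise level, and dither spread are illustrative choices, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

d, m = 50, 4000                 # ambient dimension, number of measurements
theta0 = np.zeros(d)            # a 3-sparse target vector
theta0[:3] = [0.8, -0.5, 0.3]

A = rng.standard_normal((m, d))            # sub-Gaussian sensing vectors a_i
xi = 0.1 * rng.standard_normal(m)          # pre-quantization noise xi_i
lam = 2.0                                  # dither spread (illustrative)
tau = rng.uniform(-lam, lam, size=m)       # uniform dither tau_i

# One-bit quantization: only the sign of each noisy measurement is observed.
y = np.sign(A @ theta0 + xi + tau)
y[y == 0] = 1.0

# With uniform dithering, the scaled linear estimator lam * mean(y_i * a_i)
# is nearly unbiased for theta0 when the sensing vectors are isotropic.
theta_hat = lam * (y[:, None] * A).mean(axis=0)
```

The dither is what lets the magnitude of θ0 survive quantization: without it, one-bit measurements of θ0 and of c·θ0 (c > 0) are identical, so only the direction could be recovered.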

Highlights

- We introduce a new framework for robust dithered one-bit compressed sensing where the structure of the target vector θ0 is represented via a ReLU network G : R^k → R^d, i.e., θ0 = G(x0) for some x0 ∈ R^k and k ≪ d.
- Building upon previous methods guaranteeing uniform recovery, we show that solving the empirical risk minimization problem to approximate the true representation x0 ∈ R^k is tractable under further assumptions on the ReLU networks.
- We establish our main theorem on the statistical recovery guarantee of G(x0) and the associated information-theoretic lower bound in Sections 3.1 and 3.2, respectively.
- We introduce a joint statistical and computational analysis of a proposed unconstrained empirical risk minimization method
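Recovery under a generative prior amounts to minimizing an empirical risk over the latent code x rather than over θ directly. The sketch below uses a simple surrogate, fitting G(x) to the dithered linear estimator in least squares, rather than the paper's exact ERM objective; the two-layer network, step size, and initialization near the truth are illustrative assumptions (the paper's landscape analysis is what rules out spurious stationary points for its objective):

```python
import numpy as np

rng = np.random.default_rng(1)

k, d, width = 4, 60, 40
W1 = rng.standard_normal((width, k)) / np.sqrt(k)
W2 = rng.standard_normal((d, width)) / np.sqrt(width * d)

def G(x):
    """Two-layer ReLU generative network G(x) = W2 relu(W1 x)."""
    return W2 @ np.maximum(W1 @ x, 0.0)

x0 = rng.standard_normal(k)
theta0 = G(x0)                              # target lies on the range of G

# Dithered one-bit measurements of theta0.
m, lam = 20000, 2.0
A = rng.standard_normal((m, d))
tau = rng.uniform(-lam, lam, size=m)
y = np.sign(A @ theta0 + tau)
y[y == 0] = 1.0
v = lam * (y[:, None] * A).mean(axis=0)     # linear estimate of theta0

def loss_grad(x):
    """Surrogate risk 0.5 * ||G(x) - v||^2 and its (sub)gradient in x."""
    h = W1 @ x
    r = W2 @ np.maximum(h, 0.0) - v
    g = W1.T @ ((h > 0) * (W2.T @ r))       # chain rule through the ReLU
    return 0.5 * (r @ r), g

x = x0 + 0.5 * rng.standard_normal(k)       # start near the truth (illustration)
f0, _ = loss_grad(x)
for _ in range(300):                        # plain gradient descent in latent space
    f, g = loss_grad(x)
    x = x - 0.5 * g
```

Because the optimization runs in R^k with k ≪ d, each step is cheap; what the paper's global landscape analysis supplies is the guarantee that, for its objective, descent is not trapped at spurious stationary points, so a benign initialization like the one above is not actually required.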

Results

- The authors establish the main theorem on the statistical recovery guarantee of G(x0) and the associated information-theoretic lower bound in Sections 3.1 and 3.2, respectively.
- The authors' statistical guarantee relies on the following assumption on the measurement vector and noise: Assumption 3.1.
- The measurement vector a ∈ Rd is mean 0, isotropic and sub-exponential.
- The noise ξ is a sub-exponential random variable.
- Under this assumption, the authors have the following main statistical performance theorem: Theorem 3.2.
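Assumption 3.1 permits measurement distributions with heavier-than-Gaussian tails. As an illustration, the snippet below swaps Gaussian sensing vectors for isotropic, mean-zero Laplace (sub-exponential) ones and checks that the dithered linear estimator still tracks θ0; the specific distributions and constants are illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(2)
d, m, lam = 30, 20000, 4.0

theta0 = np.zeros(d)
theta0[:2] = [0.8, -0.6]                       # unit-norm target

# Mean-zero, isotropic, sub-exponential sensing vectors:
# i.i.d. Laplace entries scaled to unit variance (var = 2 * scale**2).
A = rng.laplace(scale=1.0 / np.sqrt(2.0), size=(m, d))

xi = rng.laplace(scale=0.05, size=m)           # sub-exponential noise
tau = rng.uniform(-lam, lam, size=m)           # uniform dither
y = np.sign(A @ theta0 + xi + tau)
y[y == 0] = 1.0

# The dithered linear estimator needs only isotropy, not Gaussianity.
theta_hat = lam * (y[:, None] * A).mean(axis=0)
```

This is the practical point of relaxing Gaussianity: the same simple estimator remains consistent as long as the sensing vectors are mean-zero and isotropic, with tail behavior entering only through the rate.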

Conclusion

**Discussion of two cases**: The authors split the discussion according to the relative size of ‖x0‖₂² and 2^{−n} εwdc.

- Case 1: 2^{−n} εwdc ≲ ‖x0‖₂². This means x0 is not close to 0.
- The authors introduce a joint statistical and computational analysis of a proposed unconstrained ERM method.
- The authors show that such a method gives an improved statistical rate compared to that of convex methods in sparsity-based frameworks and, computationally, has no spurious stationary points.


References

- Ai, A., Lapanowski, A., Plan, Y. and Vershynin, R. (2014). One-bit compressed sensing with nongaussian measurements. Linear Algebra and its Applications, 441 222–239.
- Angluin, D. and Valiant, L. G. (1979). Fast probabilistic algorithms for Hamiltonian circuits and matchings. Journal of Computer and system Sciences, 18 155–193.
- Arjovsky, M., Chintala, S. and Bottou, L. (2017). Wasserstein GAN. arXiv preprint arXiv:1701.07875.
- Arora, S., Liang, Y. and Ma, T. (2015). Why are deep nets reversible: A simple theory, with implications for training. arXiv preprint arXiv:1511.05653.
- Aubin, B., Loureiro, B., Maillard, A., Krzakala, F. and Zdeborová, L. (2019). The spiked matrix model with generative priors. arXiv preprint arXiv:1905.12385.
- Bora, A., Jalal, A., Price, E. and Dimakis, A. G. (2017). Compressed sensing using generative models. arXiv preprint arXiv:1703.03208.
- Dirksen, S. and Mendelson, S. (2018a). Non-gaussian hyperplane tessellations and robust one-bit compressed sensing. arXiv preprint arXiv:1805.09409.
- Dirksen, S. and Mendelson, S. (2018b). Robust one-bit compressed sensing with partial circulant matrices. arXiv preprint arXiv:1812.06719.
- Gilbert, A. C., Zhang, Y., Lee, K., Zhang, Y. and Lee, H. (2017). Towards understanding the invertibility of convolutional neural networks. arXiv preprint arXiv:1705.08664.
- Goldstein, L., Minsker, S. and Wei, X. (2018). Structured signal recovery from non-linear and heavy-tailed measurements. IEEE Transactions on Information Theory, 64 5513–5530.
- Goldstein, L. and Wei, X. (2018). Non-Gaussian observations in nonlinear compressed sensing via Stein discrepancies. Information and Inference: A Journal of the IMA, 8 125–159. https://doi.org/10.1093/imaiai/iay006
- Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A. and Bengio, Y. (2014). Generative adversarial nets. In Advances in neural information processing systems.
- Hammernik, K., Klatzer, T., Kobler, E., Recht, M. P., Sodickson, D. K., Pock, T. and Knoll, F. (2018). Learning a variational network for reconstruction of accelerated mri data. Magnetic resonance in medicine, 79 3055–3071.
- Hand, P. and Joshi, B. (2019). Global guarantees for blind demodulation with generative priors. arXiv preprint arXiv:1905.12576.
- Hand, P., Leong, O. and Voroninski, V. (2018). Phase retrieval under a generative prior. In Advances in Neural Information Processing Systems.
- Hand, P. and Voroninski, V. (2018). Global guarantees for enforcing deep generative priors by empirical risk. In Conference On Learning Theory.
- Huang, W., Hand, P., Heckel, R. and Voroninski, V. (2018). A provably convergent scheme for compressive sensing under random generative priors. arXiv preprint arXiv:1812.04176.
- Jacques, L., Laska, J. N., Boufounos, P. T. and Baraniuk, R. G. (2013). Robust 1-bit compressive sensing via binary stable embeddings of sparse vectors. IEEE Transactions on Information Theory, 59 2082–2102.
- Kamath, A., Karmalkar, S. and Price, E. (2019). Lower bounds for compressed sensing with generative models.
- Kingma, D. P. and Welling, M. (2013). Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114.
- Ledig, C., Theis, L., Huszár, F., Caballero, J., Cunningham, A., Acosta, A., Aitken, A., Tejani, A., Totz, J., Wang, Z. et al. (2017). Photo-realistic single image super-resolution using a generative adversarial network. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE.
- Lei, N., Luo, Z., Yau, S.-T. and Gu, D. X. (2018). Geometric understanding of deep learning. arXiv preprint arXiv:1805.10451.
- Liu, Z. and Scarlett, J. (2019). Information-theoretic lower bounds for compressive sensing with generative models. arXiv preprint arXiv:1908.10744.
- Plan, Y. and Vershynin, R. (2013). Robust 1-bit compressed sensing and sparse logistic regression: A convex programming approach. IEEE Transactions on Information Theory, 59 482–494.
- Plan, Y. and Vershynin, R. (2014). Dimension reduction by random hyperplane tessellations. Discrete & Computational Geometry, 51 438–461.
- Plan, Y., Vershynin, R. and Yudovina, E. (2016). High-dimensional estimation with geometric constraints. Information and Inference: A Journal of the IMA, 6 1–40.
- Rezende, D. J., Mohamed, S. and Wierstra, D. (2014). Stochastic backpropagation and approximate inference in deep generative models. arXiv preprint arXiv:1401.4082.
- Sønderby, C. K., Caballero, J., Theis, L., Shi, W. and Huszár, F. (2016). Amortised map inference for image super-resolution. arXiv preprint arXiv:1610.04490.
- Thrampoulidis, C. and Rawat, A. S. (2018). The generalized lasso for sub-gaussian measurements with dithered quantization. arXiv preprint arXiv:1807.06976.
- Wei, X., Yang, Z. and Wang, Z. (2019). On the statistical rate of nonlinear recovery in generative models with heavy-tailed data. In International Conference on Machine Learning.
- Wellner, J. et al. (2013). Weak convergence and empirical processes: with applications to statistics. Springer Science & Business Media.
- Winder, R. (1966). Partitions of n-space by hyperplanes. SIAM Journal on Applied Mathematics, 14 811–818.
- Xu, C. and Jacques, L. (2018). Quantized compressive sensing with rip matrices: The benefit of dithering. arXiv preprint arXiv:1801.05870.
- Yang, G., Yu, S., Dong, H., Slabaugh, G., Dragotti, P. L., Ye, X., Liu, F., Arridge, S., Keegan, J., Guo, Y. et al. (2018). Dagan: Deep de-aliasing generative adversarial networks for fast compressed sensing mri reconstruction. IEEE transactions on medical imaging, 37 1310–1321.
- Yeh, R. A., Chen, C., Yian Lim, T., Schwing, A. G., Hasegawa-Johnson, M. and Do, M. N. (2017). Semantic image inpainting with deep generative models. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
- Zhang, L., Yi, J. and Jin, R. (2014). Efficient algorithms for robust one-bit compressive sensing. In International Conference on Machine Learning.
