Span Recovery for Deep Neural Networks with Applications to Input Obfuscation

ICLR, 2020.

Keywords:
Span recovery, low rank, neural networks, adversarial attack
We provably show that span recovery for deep neural networks can be accomplished efficiently and with high precision using poly(n) function evaluations, even when the networks have poly(n) layers and the output of the network is a scalar in some finite set.

Abstract:

The tremendous success of deep neural networks has motivated the need to better understand the fundamental properties of these networks, but many of the theoretical results proposed have only been for shallow networks. In this paper, we study an important primitive for understanding the meaningful input space of a deep network: span recovery...
Introduction
  • Consider the general framework in which we are given an unknown function f : Rn → R, and we want to learn properties about this function given only access to the value f (x) for different inputs x.
  • Hardt & Woodruff (2013) give an adaptive approximate span recovery algorithm using poly(n) samples, under the assumption that the function g satisfies a norm-preserving condition; this condition is restrictive and need not hold for the deep neural networks we consider here (the recovery goal is made precise in the sketch following this list).
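For readers of this summary, the recovery goal referenced above can be stated in the standard ridge-function form used by the cited works (Fornasier et al. (2012); Hardt & Woodruff (2013)). The exact conventions below (row span, the outer function g) are an assumption for illustration rather than a quotation of the paper.

```latex
% Span recovery, sketched: the unknown function factors through a low-rank
% linear map, and it may only be accessed through value queries.
\[
  f(x) = g(Ax), \qquad A \in \mathbb{R}^{k \times n}, \quad k \le n .
\]
\[
  \text{Goal: from } \mathrm{poly}(n) \text{ queries } f(x_1), f(x_2), \ldots
  \text{ output a subspace } V \subseteq \operatorname{Span}(A)
  \text{ with } \dim(V) \text{ as close to } k \text{ as possible.}
\]
```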
Highlights
  • Consider the general framework in which we are given an unknown function f : Rn → R, and we want to learn properties about this function given only access to the value f (x) for different inputs x
  • We provably show that span recovery for deep neural networks with high precision can be efficiently accomplished with poly(n) function evaluations, even when the networks have poly(n) layers and the output of the network is a scalar in some finite set
  • We use a volume bounding technique to show that a ReLU network has sufficiently large piece-wise linear sections and that gradient information can be derived from function evaluations
  • Using a novel combinatorial analysis of the sign patterns of the ReLU network, along with facts from polynomial algebra, we show that the gradient matrix has sufficient rank to allow for partial span recovery
  • We demonstrate the utility of this attack on MNIST data, where we use span recovery to generate noisy images that are classified by the network as normal digits with high confidence
  • We will demonstrate that even for such functions with a binary threshold placed at the end, giving us minimal information about the network, we can still achieve full span recovery of the weight matrix A, albeit at the cost of an ε-approximation
Results
  • For deep networks M(x) : Rn → R with ReLU activation functions, we prove that we can recover a subspace V ⊂ Span(A) of dimension at least k/2 with polynomially many non-adaptive queries. First, we use a volume bounding technique to show that a ReLU network has sufficiently large piece-wise linear sections and that gradient information can be derived from function evaluations (a minimal sketch of the resulting gradient-based recovery pipeline appears after this list).
  • Using a novel combinatorial analysis of the sign patterns of the ReLU network, along with facts from polynomial algebra, we show that the gradient matrix has sufficient rank to allow for partial span recovery.
  • We need only assume bounds on the first and second derivatives of the activation functions, as well as the fact that we can find inputs x ∈ Rn such that M (x) = 0 with good probability, and that the gradients of the network near certain points where the threshold evaluates to one are not arbitrarily small.
  • Previous span recovery algorithms heavily rely on the assumption that the gradient matrix is full rank and well-conditioned.
  • For some distribution D, it is assumed that Hf = E_{x∼D}[∇f(x)∇f(x)^T] is a rank-k matrix with minimum non-zero singular value bounded below by α, and the number of gradient or function evaluations needed depends inverse-polynomially on α.
  • In this paper, when f(x) is a neural network, we prove that Hf has sufficiently high rank, or a sufficiently large minimum non-zero singular value, under mild assumptions, using tools from polynomial algebra.
  • We will demonstrate that even for such functions with a binary threshold placed at the end, giving us minimal information about the network, we can still achieve full span recovery of the weight matrix A, albeit at the cost of an ε-approximation.
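To make the preceding bullets concrete, here is a minimal sketch of the gradient-based recovery pipeline they describe: estimate gradients from function evaluations (exact on a ReLU network whenever the queried points stay inside one piece-wise linear region), stack them into a matrix, and take its top right singular vectors as the recovered subspace. The names, the sample count m, the step size h, and the rank threshold tol are illustrative assumptions, not the paper's algorithm or constants.

```python
import numpy as np

def estimate_gradient(f, x, h=1e-4):
    """Two-point finite-difference estimate of the gradient of a scalar function f at x.

    On a ReLU network this estimate is exact whenever x and the perturbed points
    lie in the same piece-wise linear region, which is the role of the volume
    bounding argument described above.
    """
    n = x.shape[0]
    grad = np.zeros(n)
    for i in range(n):
        e = np.zeros(n)
        e[i] = h
        grad[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return grad

def recover_span(f, n, m=200, h=1e-4, tol=1e-6, seed=0):
    """Stack gradient estimates at m standard Gaussian points and return an
    orthonormal basis (rows) for their span via an SVD."""
    rng = np.random.default_rng(seed)
    G = np.stack([estimate_gradient(f, rng.standard_normal(n), h) for _ in range(m)])
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    rank = int(np.sum(s > tol * s[0])) if s.size and s[0] > 0 else 0
    return Vt[:rank]  # rows span the recovered subspace V contained in Span(A)
```

For example, with `f = lambda x: np.maximum(A @ x, 0.0).sum()` for a random A of shape (3, 50), `recover_span(f, 50)` should return three orthonormal rows lying, up to numerical error, in the row span of A.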
Conclusion
  • When applying span recovery to a given network, we first calculate the gradients analytically via auto-differentiation at a fixed number of sample points distributed according to a standard Gaussian.
  • On the real dataset MNIST, we demonstrate the utility of span recovery algorithms as an attack that fools neural networks into misclassifying noisy inputs (a minimal sketch of this obfuscation step follows this list).
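The obfuscation idea suggested by the two bullets above is that noise confined to the orthogonal complement of the recovered span cannot change the first-layer pre-activations, and hence cannot change the network's prediction. The sketch below illustrates this under the assumption that the first layer is x ↦ Ax with no bias and that `span_basis` has orthonormal rows; the function name, the noise scale, and the use of NumPy are illustrative choices, not the paper's implementation.

```python
import numpy as np

def obfuscate(x, span_basis, noise_scale=5.0, seed=0):
    """Add large noise restricted to the orthogonal complement of the recovered span.

    Assumption (for illustration): the network's first layer is x -> A x with no
    bias, and span_basis is a (k, n) array with orthonormal rows spanning the rows
    of A. Then A @ delta == 0 for the delta constructed below, so the network's
    output on x + delta equals its output on x, while x + delta can look like noise.
    """
    rng = np.random.default_rng(seed)
    V = span_basis
    noise = noise_scale * rng.standard_normal(x.shape[0])
    delta = noise - V.T @ (V @ noise)  # remove the component lying in the span
    return x + delta
```

When the span is only recovered approximately (as in the partial-recovery guarantee above), A @ delta is merely small rather than exactly zero, which is consistent with the MNIST experiments where the noisy images are still classified as the original digits with high confidence.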
Funding
  • The authors Rajesh Jayaram and David Woodruff would like to acknowledge partial support from the National Science Foundation under Grant No. CCF-1815840.
Reference
  • IG Abrahamson et al. Orthant probabilities for the quadrivariate normal distribution. The Annals of Mathematical Statistics, 35(4):1685–1703, 1964.
  • Sanjeev Arora, Aditya Bhaskara, Rong Ge, and Tengyu Ma. Provable bounds for learning some deep representations. In International Conference on Machine Learning, pp. 584–592, 2014.
  • Ralph Hoyt Bacon. Approximations to multivariate normal orthant probabilities. The Annals of Mathematical Statistics, 34(1):191–198, 1963. ISSN 0003-4851. URL http://www.jstor.org/stable/2991294.
  • Ainesh Bakshi, Rajesh Jayaram, and David P Woodruff. Learning two layer rectified neural networks in polynomial time. In Alina Beygelzimer and Daniel Hsu (eds.), Proceedings of the Thirty-Second Conference on Learning Theory, volume 99 of Proceedings of Machine Learning Research, pp. 195–268, Phoenix, USA, 25–28 Jun 2019. PMLR. URL http://proceedings.mlr.press/v99/bakshi19a.html.
  • Pin-Yu Chen, Huan Zhang, Yash Sharma, Jinfeng Yi, and Cho-Jui Hsieh. Zoo: Zeroth order optimization based blackbox attacks to deep neural networks without training substitute models. In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, pp. 15–26. ACM, 2017.
  • Albert Cohen, Ingrid Daubechies, Ronald DeVore, Gerard Kerkyacharian, and Dominique Picard. Capturing ridge functions in high dimensions from point queries. Constructive Approximation, 35(2):225–243, 2012.
  • Nicholas Cook et al. Lower bounds for the smallest singular value of structured random matrices. The Annals of Probability, 46(6):3442–3500, 2018.
  • Josip Djolonga, Andreas Krause, and Volkan Cevher. High-dimensional gaussian process bandits. In Advances in Neural Information Processing Systems, pp. 1025–1033, 2013.
  • Massimo Fornasier, Karin Schnass, and Jan Vybiral. Learning functions of few arbitrary linear parameters in high dimensions. Foundations of Computational Mathematics, 12(2):229–262, 2012.
  • Rong Ge, Rohith Kuditipudi, Zhize Li, and Xiang Wang. Learning two-layer neural networks with symmetric inputs. In International Conference on Learning Representations, 2019.
  • Surbhi Goel and Adam R. Klivans. Learning neural networks with two nonlinear layers in polynomial time. In Alina Beygelzimer and Daniel Hsu (eds.), Proceedings of the Thirty-Second Conference on Learning Theory, volume 99 of Proceedings of Machine Learning Research, pp. 1470–1499, Phoenix, USA, 25–28 Jun 2019. PMLR. URL http://proceedings.mlr.press/v99/goel19b.html.
  • Surbhi Goel, Varun Kanade, Adam Klivans, and Justin Thaler. Reliably learning the relu in polynomial time. In Conference on Learning Theory, pp. 1004–1042, 2017.
  • Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014.
  • H Tracy Hall, Leslie Hogben, Ryan Martin, and Bryan Shader. Expected values of parameters associated with the minimum rank of a graph. Linear Algebra and its Applications, 433(1):101–117, 2010.
  • Moritz Hardt and David P Woodruff. How robust are linear sketches to adaptive inputs? In Proceedings of the forty-fifth annual ACM symposium on Theory of computing, pp. 121–130. ACM, 2013.
  • Sandy Huang, Nicolas Papernot, Ian Goodfellow, Yan Duan, and Pieter Abbeel. Adversarial attacks on neural network policies. arXiv preprint arXiv:1702.02284, 2017.
  • Matthew Jagielski, Nicholas Carlini, David Berthelot, Alex Kurakin, and Nicolas Papernot. High-fidelity extraction of neural network models. arXiv preprint arXiv:1909.01838, 2019.
  • Majid Janzamin, Hanie Sedghi, and Anima Anandkumar. Beating the perils of non-convexity: Guaranteed training of neural networks using tensor methods. arXiv preprint arXiv:1506.08473, 2015.
  • B. Laurent and P. Massart. Adaptive estimation of a quadratic functional by model selection. Ann. Statist., 28(5):1302–1338, 2000. doi: 10.1214/aos/1015957395. URL https://doi.org/10.1214/aos/1015957395.
  • Ker-Chau Li. Sliced inverse regression for dimension reduction. Journal of the American Statistical Association, 86 (414):316–327, 1991. ISSN 01621459. URL http://www.jstor.org/stable/2290563.
  • Tetsuhisa Miwa, AJ Hayter, and Satoshi Kuriki. The evaluation of general non-centred orthant probabilities. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 65(1):223–234, 2003.
  • Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, Somesh Jha, Z Berkay Celik, and Ananthram Swami. Practical black-box attacks against machine learning. In Proceedings of the 2017 ACM on Asia conference on computer and communications security, pp. 506–519. ACM, 2017.
  • Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013.
  • Hemant Tyagi and Volkan Cevher. Learning non-parametric basis independent models from point queries via low-rank methods. Applied and Computational Harmonic Analysis, 37(3):389–412, 2014.
  • Roman Vershynin. Introduction to the non-asymptotic analysis of random matrices. arXiv preprint arXiv:1011.3027, 2010.
  • Yingcun Xia, Howell Tong, Wai Keungxs Li, and Li-Xing Zhu. An adaptive estimation of dimension reduction space. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 64(3):363–410, 2002.
  • Qiuyi Zhang, Rina Panigrahy, Sushant Sachdeva, and Ali Rahimi. Electron-proton dynamics in deep learning. arXiv preprint arXiv:1702.00458, pp. 1–31, 2017.
  • By concentration results for χ2 distributions (Laurent & Massart (2000)), the preceding bound holds. Now, fix any sign pattern Si ⊆ [ki] for the i-th layer Mi(x) = φ(Wi φ(··· φ(Ax) ···)), and let S = (S1, S2, ..., Sd+1). We note that we can enforce the constraint that, for an input x ∈ Rn, the sign pattern of Mi(x) is precisely Si. To see this, note that after conditioning on a sign pattern for each layer, the entire network becomes linear. Thus each constraint ⟨(Wi)j,∗, Mi+1(x)⟩ ≥ 0 or ⟨(Wi)j,∗, Mi+1(x)⟩ ≤ 0 can be enforced as a linear constraint on the coordinates of x (a small numerical sketch of this observation appears after these notes).
  • So we create a variable xi for each coefficient i ∈ [ck] in this linear combination, and let fj(x) be the linear function of the xi's which gives the value of the j-th coordinate of w. Then f(x) = (f1(x), ..., fk(x)) is a k-tuple of polynomials, each in ck variables, where each polynomial has degree 1. By Theorem 4.1 of Hall et al. (2010), it follows that the number of sign patterns which contain at most k/2 non-zero entries is at most (ck + k/2 choose ck).
  • O(n log(1/γ)) (see Lemma 1 of Laurent & Massart (2000)), so by a union bound both of these events occur with probability at least 1 − γ. Now, since ‖(c∗ − c)x‖2 ≤ ε0·2^−N (after rescaling N by a factor of log(‖gi‖2) = O(log n)), and since 2^N is also an upper bound on the spectral norm of the Hessian of σ by construction, it follows that ∇gi σV(c·gi) > η/2.
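The note above about sign patterns can be checked numerically: conditioning on a 0/1 pattern per layer turns every ReLU into either the identity or zero, so the network restricted to that region is a single linear map, and requiring a coordinate's sign amounts to a linear inequality in x. The sketch below illustrates that observation only; the layer ordering, the helper name, and the use of NumPy are assumptions, not the paper's notation.

```python
import numpy as np

def linear_map_for_pattern(weights, signs):
    """Return the matrix of the linear map a ReLU network computes on the region
    where a fixed sign pattern holds.

    weights: list of layer matrices [A, W1, ..., Wd], applied innermost (A) first.
    signs:   one 0/1 vector per layer (1 = ReLU active, 0 = ReLU clamped to zero).
    Conditioned on the pattern, each ReLU layer acts as diag(signs[i]) @ weights[i],
    so the network collapses to a product of masked weight matrices; requiring a
    coordinate of (weights[i] @ L_prev) @ x to be >= 0 or <= 0 is linear in x.
    """
    L = np.eye(weights[0].shape[1])  # identity on the input space R^n
    for W, s in zip(weights, signs):
        L = np.diag(np.asarray(s, dtype=float)) @ (W @ L)
    return L
```

On the region of inputs where the chosen pattern is realized, the network agrees with the linear map returned above, which is the linearity used in the argument in these notes.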