Learning One-hidden-layer Neural Networks with Landscape Design

International Conference on Learning Representations (2017)

Abstract
We consider the problem of learning a one-hidden-layer neural network: we assume the input x∈ℝ^d is drawn from a Gaussian distribution and the label is y = a^⊤σ(Bx) + ξ, where a is a nonnegative vector in ℝ^m with m≤d, B∈ℝ^{m×d} is a full-rank weight matrix, and ξ is a noise vector. We first give an analytic formula for the population risk of the standard squared loss and demonstrate that it implicitly attempts to decompose a sequence of low-rank tensors simultaneously. Inspired by this formula, we design a non-convex objective function G(·) whose landscape is guaranteed to have the following properties: 1. All local minima of G are also global minima. 2. All global minima of G correspond to the ground-truth parameters. 3. The value and gradient of G can be estimated from samples. With these properties, stochastic gradient descent on G provably converges to the global minimum and learns the ground-truth parameters. We also prove a finite sample complexity result and validate the results with simulations.
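As a concrete illustration of the data model described in the abstract, below is a minimal NumPy sketch of the generative process y = a^⊤σ(Bx) + ξ and an empirical estimate of the squared-loss risk. The choice of σ as ReLU, the noise scale, and the dimensions d, m, n are illustrative assumptions not fixed by the abstract; the objective G(·) itself is defined in the paper and is not reproduced here.

```python
import numpy as np

# Minimal sketch of the data-generating model from the abstract.
# Assumed (not stated in the abstract): sigma = ReLU, xi is i.i.d.
# Gaussian noise; d, m, and the sample count n are illustrative.
rng = np.random.default_rng(0)
d, m, n = 10, 4, 1000                  # input dim, hidden width (m <= d), samples

B = rng.standard_normal((m, d))        # full-rank weight matrix B in R^{m x d}
a = rng.uniform(0.5, 1.5, size=m)      # nonnegative output weights a in R^m

X = rng.standard_normal((n, d))        # inputs x ~ N(0, I_d)
sigma = lambda z: np.maximum(z, 0.0)   # assumed activation (ReLU)
xi = 0.1 * rng.standard_normal(n)      # noise vector
y = sigma(X @ B.T) @ a + xi            # labels y = a^T sigma(B x) + xi

def squared_risk(a_hat, B_hat):
    """Empirical estimate of the squared-loss risk at (a_hat, B_hat)."""
    return np.mean((sigma(X @ B_hat.T) @ a_hat - y) ** 2)

# At the ground truth, the risk is close to the noise variance:
print(squared_risk(a, B))
```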