Training (Overparametrized) Neural Networks in Near-Linear Time

ITCS (2020)

Citations: 33 | Views: 83
Abstract
The slow convergence rate and pathological curvature issues of first-order gradient methods for training deep neural networks initiated an ongoing effort to develop faster second-order optimization algorithms beyond SGD, without compromising the generalization error. Despite their remarkable convergence rate (independent of the training batch size n), second-order algorithms incur a daunting slowdown in the cost per iteration (inverting the Hessian matrix of the loss function), which renders them impractical. Very recently, this computational overhead was mitigated by the works of [ZMG19, CGH+19], yielding an O(mn^2)-time second-order algorithm for training two-layer overparametrized neural networks of polynomial width m. We show how to speed up the algorithm of [CGH+19], achieving an Õ(mn)-time backpropagation algorithm for training (mildly overparametrized) ReLU networks, which is near-linear in the dimension (mn) of the full gradient (Jacobian) matrix. The centerpiece of our algorithm is to reformulate the Gauss-Newton iteration as an ℓ_2-regression problem, and then use a Fast-JL type dimension reduction to precondition the underlying Gram matrix in time independent of m, allowing us to find a sufficiently good approximate solution via first-order conjugate gradient. Our result provides a proof-of-concept that advanced machinery from randomized linear algebra – which led to recent breakthroughs in convex optimization (ERM, LPs, regression) – can be carried over to the realm of deep learning as well.
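The sketch-and-precondition idea summarized in the abstract can be illustrated concretely. Below is a minimal, self-contained Python sketch (not the paper's actual code) of solving an ℓ_2-regression min_x ||Ax - b||_2 by (i) applying a Fast-JL style subsampled randomized Hadamard transform (SRHT) to A, (ii) using the R factor from a QR factorization of the small sketch as a preconditioner, and (iii) running first-order conjugate gradient on the preconditioned normal equations. All names (fwht, srht_sketch, sketch_precondition_lsq) and parameter choices (e.g. sketch size s = 4d) are illustrative assumptions; the paper applies this machinery to the Gauss-Newton step of training an overparametrized ReLU network, where A plays the role of the Jacobian.

```python
# Hedged illustration of sketch-and-precondition least squares:
# SRHT (Fast-JL) sketch -> QR preconditioner -> conjugate gradient.
# Names and parameters are assumptions for this demo, not the paper's code.
import numpy as np

def fwht(a):
    """Fast Walsh-Hadamard transform along axis 0 (length must be a power of 2)."""
    a = a.copy()
    h, n = 1, a.shape[0]
    while h < n:
        for i in range(0, n, 2 * h):
            x, y = a[i:i + h].copy(), a[i + h:i + 2 * h].copy()
            a[i:i + h], a[i + h:i + 2 * h] = x + y, x - y
        h *= 2
    return a / np.sqrt(n)  # orthonormal normalization

def srht_sketch(A, s, rng):
    """Subsampled randomized Hadamard transform: an s x d sketch of A (N x d)."""
    N, d = A.shape
    n = 1 << (N - 1).bit_length()                 # pad rows to a power of 2
    Ap = np.vstack([A, np.zeros((n - N, d))])
    signs = rng.choice([-1.0, 1.0], size=n)       # random diagonal D
    rows = rng.choice(n, size=s, replace=False)   # row subsampling P
    return np.sqrt(n / s) * fwht(signs[:, None] * Ap)[rows]

def sketch_precondition_lsq(A, b, s=None, tol=1e-10, max_iter=100, seed=0):
    """Solve min_x ||Ax - b|| via SRHT preconditioning + CG on the normal equations."""
    rng = np.random.default_rng(seed)
    N, d = A.shape
    s = s or 4 * d
    _, R = np.linalg.qr(srht_sketch(A, s, rng))   # A @ inv(R) is well-conditioned w.h.p.

    def M(y):  # preconditioned normal-equations operator: R^{-T} A^T A R^{-1} y
        v = np.linalg.solve(R, y)
        return np.linalg.solve(R.T, A.T @ (A @ v))

    rhs = np.linalg.solve(R.T, A.T @ b)
    stop = tol * np.linalg.norm(rhs)
    y = np.zeros(d)
    r = rhs - M(y)
    p, rs = r.copy(), r @ r
    for _ in range(max_iter):                     # standard conjugate gradient loop
        Mp = M(p)
        alpha = rs / (p @ Mp)
        y += alpha * p
        r -= alpha * Mp
        rs_new = r @ r
        if np.sqrt(rs_new) < stop:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return np.linalg.solve(R, y)                  # map back: x = R^{-1} y

# Tiny demo: a tall, badly conditioned system (stand-in for one Gauss-Newton step).
rng = np.random.default_rng(1)
A = rng.standard_normal((2048, 50)) @ np.diag(np.logspace(0, 6, 50))
x_true = rng.standard_normal(50)
x_hat = sketch_precondition_lsq(A, A @ x_true)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

The point of the preconditioner is that A R^{-1} is well-conditioned with high probability once the sketch size is roughly O(d log d), so conjugate gradient needs only a logarithmic number of iterations, each dominated by one multiplication with A. This is the mechanism behind the per-iteration speedup described in the abstract.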
Keywords
neural networks, training, near-linear