Linear Convergence of Accelerated Stochastic Gradient Descent for Nonconvex Nonsmooth Optimization
arXiv preprint arXiv:1704.07953 [math.OC, Optimization and Control], 2017.
Abstract:
In this paper, we study the stochastic gradient descent (SGD) method for nonconvex nonsmooth optimization and propose an accelerated SGD method that combines the variance reduction technique with Nesterov's extrapolation technique. Moreover, based on the local error bound condition, we establish the linear convergence of our method...
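The abstract describes a scheme that pairs a variance-reduced stochastic gradient estimator with Nesterov's extrapolation, with a proximal step handling the nonsmooth term. The following is a minimal sketch of that combination, not the authors' exact algorithm: the SVRG-style estimator, the momentum constant, the step size, the l1 regularizer, and the synthetic least-squares data are all illustrative assumptions (the paper targets nonconvex component functions and a local error bound condition that this toy problem does not exercise).

```python
# Minimal sketch (assumed, not the paper's exact method): SVRG-style
# variance reduction + Nesterov extrapolation + a proximal step, for
#     min_x  (1/n) * sum_i f_i(x) + lam * ||x||_1.
# Here f_i(x) = 0.5 * (a_i^T x - b_i)^2 for simplicity; the paper
# allows nonconvex f_i.
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1 (the nonsmooth term)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def accelerated_svrg(grad_i, n, x0, step=0.01, momentum=0.5,
                     lam=0.01, epochs=50, rng=None):
    rng = rng or np.random.default_rng(0)
    x_prev = x0.copy()
    x = x0.copy()
    for _ in range(epochs):
        # Full gradient at a snapshot point: the variance-reduction anchor.
        snapshot = x.copy()
        full_grad = sum(grad_i(snapshot, i) for i in range(n)) / n
        for _ in range(n):
            # Nesterov extrapolation: look ahead along the momentum direction.
            y = x + momentum * (x - x_prev)
            i = rng.integers(n)
            # SVRG estimator: unbiased for the full gradient at y, with
            # variance shrinking as the iterates approach the snapshot.
            g = grad_i(y, i) - grad_i(snapshot, i) + full_grad
            x_prev = x
            # Proximal step handles the nonsmooth l1 term exactly.
            x = soft_threshold(y - step * g, step * lam)
    return x

if __name__ == "__main__":
    # Illustrative synthetic data (assumed, not from the paper).
    rng = np.random.default_rng(1)
    n, d = 200, 20
    A = rng.standard_normal((n, d))
    x_true = rng.standard_normal(d) * (rng.random(d) > 0.5)
    b = A @ x_true + 0.1 * rng.standard_normal(n)

    def grad_i(x, i):
        # Gradient of f_i(x) = 0.5 * (A[i] @ x - b[i]) ** 2.
        return A[i] * (A[i] @ x - b[i])

    x_hat = accelerated_svrg(grad_i, n, np.zeros(d))
    print("recovered support:", np.flatnonzero(np.abs(x_hat) > 1e-3))
```

The soft-thresholding step is what makes the nonsmooth l1 term tractable; swapping in the proximal operator of another regularizer adapts the sketch to other composite objectives.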