Linear Convergence of Accelerated Stochastic Gradient Descent for Nonconvex Nonsmooth Optimization

arXiv preprint arXiv:1704.07953 [Optimization and Control], 2017.


Abstract:

In this paper, we study the stochastic gradient descent (SGD) method for nonconvex nonsmooth optimization, and propose an accelerated SGD method by combining the variance reduction technique with Nesterov's extrapolation technique. Moreover, based on the local error bound condition, we establish the linear convergence of our method…
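The paper's exact algorithm is not reproduced on this page. As a rough illustration of the idea the abstract describes, the sketch below combines an SVRG-style variance-reduced gradient estimator with a Nesterov extrapolation step and a proximal step for the nonsmooth term. All function names, hyperparameters, and the choice of an l1 regularizer are illustrative assumptions, not the authors' method.

```python
import numpy as np

def prox_l1(x, t):
    """Proximal operator of t * ||x||_1 (soft-thresholding) --
    a standard nonsmooth regularizer chosen here only for illustration."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def accelerated_svrg(grad_i, n, x0, step, momentum, reg, epochs=20, inner=None):
    """Sketch of variance-reduced SGD with Nesterov-style extrapolation.

    grad_i(x, i) -- gradient of the i-th component function at x
    n            -- number of component functions
    Hyperparameters (step, momentum, reg) are illustrative, not the paper's.
    """
    rng = np.random.default_rng(0)
    inner = inner or n
    x = x_prev = x0.copy()
    for _ in range(epochs):
        # SVRG outer loop: take a snapshot and compute its full gradient.
        snapshot = x.copy()
        full_grad = np.mean([grad_i(snapshot, i) for i in range(n)], axis=0)
        for _ in range(inner):
            # Nesterov extrapolation: look ahead along the previous direction.
            y = x + momentum * (x - x_prev)
            i = rng.integers(n)
            # Variance-reduced stochastic gradient at the extrapolated point.
            g = grad_i(y, i) - grad_i(snapshot, i) + full_grad
            x_prev = x
            # Proximal step handles the nonsmooth part of the objective.
            x = prox_l1(y - step * g, step * reg)
    return x

# Toy usage on a (convex) sparse least-squares problem, just to exercise
# the code: f_i(x) = 0.5 * (a_i^T x - b_i)^2 plus an l1 term.
rng = np.random.default_rng(1)
A, b = rng.standard_normal((100, 20)), rng.standard_normal(100)
grad_i = lambda x, i: (A[i] @ x - b[i]) * A[i]
x_hat = accelerated_svrg(grad_i, len(b), np.zeros(20),
                         step=0.01, momentum=0.5, reg=0.1)
```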
