Faster Gradient-Free Proximal Stochastic Methods for Nonconvex Nonsmooth Optimization

AAAI, 2019.


Abstract:

The proximal gradient method plays an important role in solving many machine learning tasks, especially nonsmooth problems. However, in some settings, such as bandit models and black-box learning problems, the proximal gradient method can fail because the explicit gradients of these problems are difficult ...
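The abstract refers to gradient-free (zeroth-order) proximal stochastic methods for black-box objectives. As a rough illustration of the general idea (not the paper's specific algorithm), the sketch below combines a standard two-point zeroth-order gradient estimate with the proximal operator of an L1 regularizer; the function names, step size, and smoothing parameter are illustrative assumptions.

```python
import numpy as np

def zo_gradient(f, x, mu=1e-4, rng=None):
    # Two-point zeroth-order gradient estimate along a random Gaussian
    # direction u:  g ~= (f(x + mu*u) - f(x)) / mu * u
    rng = np.random.default_rng() if rng is None else rng
    u = rng.standard_normal(x.shape)
    return (f(x + mu * u) - f(x)) / mu * u

def prox_l1(x, lam):
    # Proximal operator of lam * ||x||_1 (soft-thresholding).
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def zo_proximal_sgd(f, x0, lam=0.1, eta=0.05, iters=500, seed=0):
    # Illustrative zeroth-order proximal descent loop: replace the true
    # gradient in a proximal gradient step with a finite-difference estimate.
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for _ in range(iters):
        g = zo_gradient(f, x, rng=rng)
        x = prox_l1(x - eta * g, eta * lam)
    return x
```

For example, minimizing `f(x) = ||x - a||^2 + 0.1*||x||_1` with `a = [1, 0, -1]` using only function evaluations drives the composite objective well below its value at the zero initializer.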
