Escaping Saddle Points in Constrained Optimization

Advances in Neural Information Processing Systems 31 (NIPS 2018)

Abstract
In this paper, we study the problem of escaping from saddle points in smooth nonconvex optimization over a convex set C. We propose a generic framework that yields convergence to a second-order stationary point of the problem, provided that the convex set C is simple for a quadratic objective function. Specifically, our results hold if one can find a rho-approximate solution of a quadratic program subject to C in polynomial time, where rho < 1 is a positive constant that depends on the structure of the set C. Under this condition, we show that the sequence of iterates generated by the proposed framework reaches an (epsilon, gamma)-second-order stationary point (SOSP) in at most O(max{epsilon^{-2}, rho^{-3} gamma^{-3}}) iterations. We further characterize the overall complexity of reaching an SOSP when the convex set C can be written as a set of quadratic constraints and the objective function's Hessian has a specific structure over C. Finally, we extend our results to the stochastic setting and characterize the number of stochastic gradient and Hessian evaluations needed to reach an (epsilon, gamma)-SOSP.
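The framework sketched in the abstract alternates first-order progress with a curvature-based escape step: take projected-gradient steps while they make progress, and near a first-order stationary point, (approximately) minimize a quadratic over the feasible set to find a negative-curvature escape direction. The following toy sketch illustrates that idea only; it is not the paper's algorithm. All names are mine, the constraint set is a simple box (so projection is trivial), and an exact eigendecomposition stands in for the paper's rho-approximate quadratic-program oracle.

```python
import numpy as np

# Toy problem: f(x) = x1^2 - x2^2 over the box C = [-1, 1]^2.
# The origin is a saddle point; the constrained minimizers are (0, +/-1).
def f(x):
    return x[0] ** 2 - x[1] ** 2

def grad(x):
    return np.array([2.0 * x[0], -2.0 * x[1]])

def hess(x):
    return np.diag([2.0, -2.0])

def project(x, lo=-1.0, hi=1.0):
    """Euclidean projection onto the box constraint set C."""
    return np.clip(x, lo, hi)

def escape_saddles(x0, eps=1e-4, gamma=1e-3, eta=0.1, max_iter=500):
    x = project(np.asarray(x0, dtype=float))
    for _ in range(max_iter):
        # Phase 1: projected-gradient step while it still makes progress.
        x_new = project(x - eta * grad(x))
        if np.linalg.norm(x_new - x) > eps:
            x = x_new
            continue
        # Phase 2: near first-order stationarity, look for negative curvature.
        # (Stand-in for the paper's rho-approximate quadratic subproblem over C:
        # an exact eigendecomposition, feasible here because the Hessian is 2x2.)
        w, V = np.linalg.eigh(hess(x))
        if w[0] >= -gamma:
            return x  # no significant negative curvature: approximate SOSP
        d = V[:, 0]
        # Try both signs of the curvature direction, staying feasible.
        cands = [project(x + eta * d), project(x - eta * d)]
        best = min(cands, key=f)
        if f(best) >= f(x) - 1e-12:
            return x  # curvature direction gives no feasible descent: stop
        x = best
    return x

# Start on the unstable manifold of the saddle: plain projected gradient
# descent from (0.5, 0) stalls at (0, 0); the escape step breaks out.
x_star = escape_saddles([0.5, 0.0])
```

The escape step only accepts the curvature direction if it yields feasible descent; at a boundary point such as (0, 1) the negative-curvature direction points outside C, so the sketch correctly terminates there.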
Keywords
polynomial time,saddle points,a set,constrained optimization,convex set,arithmetic operations,stationary point,quadratic program