Inertial Newton Algorithms Avoiding Strict Saddle Points

Journal of Optimization Theory and Applications (2023)

Abstract
We study the asymptotic behavior of second-order algorithms mixing Newton’s method and inertial gradient descent in non-convex landscapes. We show that, despite the Newtonian behavior of these methods, they almost always escape strict saddle points. We also evidence the role played by the hyper-parameters of these methods in their qualitative behavior near critical points. The theoretical results are supported by numerical illustrations.
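For orientation, the class of methods in question can be pictured as a continuous-time inertial Newton dynamic, x'' + alpha x' + beta Hess_f(x) x' + grad_f(x) = 0, discretized through an equivalent first-order system that never forms the Hessian. Below is a minimal sketch of such an INNA-type iteration on a toy strict-saddle landscape; the test function, the parameter values (alpha, beta, step size gamma), and the initialization are illustrative assumptions, not taken from the paper.

```python
# A minimal sketch, not the paper's verbatim algorithm: explicit Euler
# discretization of the damped inertial Newton ODE
#     x'' + alpha * x' + beta * Hess_f(x) x' + grad_f(x) = 0,
# rewritten as the Hessian-free first-order system used by INNA-type
# methods. All parameter choices below are illustrative assumptions.
import numpy as np

def grad_f(z):
    # f(x, y) = x**2 + y**4 / 4 - y**2 / 2: the origin is a strict saddle
    # (its Hessian has the negative eigenvalue -1); the minimizers are
    # (0, 1) and (0, -1).
    x, y = z
    return np.array([2.0 * x, y**3 - y])

def inertial_newton(theta0, alpha=0.5, beta=0.1, gamma=0.01, steps=10_000):
    theta = np.asarray(theta0, dtype=float)
    psi = (1.0 - alpha * beta) * theta  # a common initialization choice
    for _ in range(steps):
        # theta' = (1/beta - alpha) theta - psi/beta - beta grad_f(theta)
        # psi'   = (1/beta - alpha) theta - psi/beta
        drift = (1.0 / beta - alpha) * theta - psi / beta
        theta, psi = (theta + gamma * (drift - beta * grad_f(theta)),
                      psi + gamma * drift)
    return theta

# Start near the saddle, slightly off its stable manifold: the iterates
# escape the saddle and settle at a minimizer rather than converging to it.
print(inertial_newton([1.0, 1e-8]))  # expected: approx. [0., +/-1.]
```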
Keywords
Non-convex optimization, Algorithms for machine learning, Dynamical systems, Convergence analysis