On The Fast Convergence Of Random Perturbations Of The Gradient Flow

arXiv: Probability (2021)

Citations: 25
Abstract
We consider in this work small random perturbations (of multiplicative noise type) of the gradient flow. We prove that, under mild conditions, when the potential function is a Morse function satisfying an additional strong saddle condition, the perturbed gradient flow converges to a neighborhood of the local minimizers in O(ln(epsilon^{-1})) time on average, where epsilon is the scale of the random perturbation. Under a change of time scale, this indicates that the diffusion process approximating the stochastic gradient method takes (up to a logarithmic factor) only linear time in the inverse stepsize to escape all saddle points. This can be regarded as a manifestation of the fast convergence of the discrete-time stochastic gradient method, which is used heavily in modern statistical machine learning.
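The object described here is the perturbed gradient flow dX_t = -grad f(X_t) dt + epsilon * sigma(X_t) dW_t. As a minimal illustrative sketch (not the paper's construction), the following Python code simulates the simpler additive-noise case (sigma equal to the identity) with the Euler–Maruyama scheme on a Morse potential that has a strict saddle at the origin, and empirically checks that the mean time to leave a neighborhood of the saddle grows like ln(1/epsilon). The function names (grad_f, escape_time), the potential, and all parameter values are assumptions chosen for the example.

```python
import numpy as np

def grad_f(z):
    # Gradient of f(x, y) = x**4/4 - x**2/2 + y**2/2: a Morse function
    # with a strict saddle at the origin and local minima at (+-1, 0).
    x, y = z
    return np.array([x**3 - x, y])

def escape_time(eps, dt=1e-3, radius=0.5, t_max=200.0, rng=None):
    """Euler-Maruyama simulation of dX = -grad f(X) dt + eps dW,
    started at the saddle; returns the first time |X| exceeds `radius`."""
    rng = np.random.default_rng() if rng is None else rng
    z = np.zeros(2)
    t = 0.0
    while t < t_max:
        z += -grad_f(z) * dt + eps * np.sqrt(dt) * rng.standard_normal(2)
        t += dt
        if np.linalg.norm(z) > radius:
            return t
    return t_max  # did not escape within the time budget

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    for eps in (1e-1, 1e-2, 1e-3, 1e-4):
        times = [escape_time(eps, rng=rng) for _ in range(20)]
        # The mean escape time should grow roughly like ln(1/eps),
        # consistent with the O(ln(epsilon^{-1})) rate in the abstract.
        print(f"eps = {eps:.0e}  mean escape time ~ {np.mean(times):6.2f}  "
              f"ln(1/eps) = {np.log(1 / eps):.2f}")
```

Since the unstable eigenvalue of the linearized drift at this saddle is 1, the expected escape time is roughly ln(radius/epsilon), so the printed means should track ln(1/epsilon) up to an additive constant.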
Keywords
Random perturbations of dynamical systems, saddle point, exit problem, stochastic gradient descent, diffusion approximation