Tight Convergence Rate Bounds for Optimization Under Power Law Spectral Conditions

Journal of Machine Learning Research (2024)

Abstract
The performance of optimization on quadratic problems depends sensitively on the low-lying part of the spectrum. For large (effectively infinite-dimensional) problems, this part of the spectrum can often be naturally represented or approximated by power-law distributions, resulting in power-law convergence rates when these problems are solved iteratively by gradient-based algorithms. In this paper, we propose a new spectral condition providing tighter upper bounds for problems with power-law optimization trajectories. We use this condition to build a complete picture of upper and lower bounds for a wide range of optimization algorithms (Gradient Descent, Steepest Descent, Heavy Ball, and Conjugate Gradients), with an emphasis on the underlying schedules of learning rate and momentum. In particular, we demonstrate how an optimally accelerated method, its schedule, and its convergence upper bound can be obtained in a unified manner for a given shape of the spectrum. We also provide the first proofs of tight lower bounds on the convergence rates of Steepest Descent and Conjugate Gradients under spectral power laws with general exponents. Our experiments show that the obtained convergence bounds and acceleration strategies are not only relevant for exactly quadratic optimization problems but also remain fairly accurate when applied to the training of neural networks.
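To make the power-law convergence behaviour described in the abstract concrete, the following minimal sketch (not the paper's code; the dimension d, exponent nu, step size, and iteration count are illustrative assumptions) runs plain gradient descent on a quadratic with eigenvalues lambda_k = k^(-nu) and records the loss, which is expected to decay roughly as a power law of the iteration number rather than geometrically.

```python
import numpy as np

# Hypothetical illustration: gradient descent on f(x) = 0.5 * x^T H x,
# where H has a power-law spectrum lambda_k = k^(-nu). Working in the
# eigenbasis of H, each coordinate evolves independently.

d = 2000                                           # assumed (large) dimension
nu = 1.5                                           # assumed spectral exponent
lam = np.arange(1, d + 1, dtype=float) ** (-nu)    # eigenvalues lambda_k = k^(-nu)

rng = np.random.default_rng(0)
x = rng.normal(size=d)          # initial point; the optimum is x* = 0

lr = 1.0 / lam[0]               # step size 1 / lambda_max keeps GD stable
losses = []
for t in range(10_000):
    losses.append(0.5 * np.sum(lam * x ** 2))      # f(x_t) in the eigenbasis
    x -= lr * lam * x                               # GD step: x <- x - lr * H x

# On a log-log plot the recorded losses should look roughly linear,
# i.e. f(x_t) ~ t^(-beta), consistent with power-law convergence rates.
print(losses[100], losses[1000], losses[-1])
```

Swapping the plain GD update for an accelerated scheme (e.g. Heavy Ball with a tuned schedule) under the same assumed spectrum is the kind of comparison the paper's bounds address.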
Keywords
Gradient Descent, Steepest Descent, Heavy Ball, Conjugate Gradients, power-law spectrum, convergence rate, tight bounds, non-strongly-convex least squares, acceleration, neural networks