Fast Rates for Online Gradient Descent Without Strong Convexity via Hoffman's Bound.

arXiv: Learning (2018)

Abstract
Hoffman's classical result gives a bound on the distance of a point from a convex and compact polytope in terms of the magnitude of violation of the constraints. Recently, several results showed that Hoffman's bound can be used to derive strongly-convex-like rates for first-order methods for convex optimization of curved, though not strongly convex, functions over polyhedral sets. In this work, we use this classical result for the first time to obtain faster rates for online convex optimization over polyhedral sets with curved convex, though not strongly convex, loss functions. Mainly, we show that under several reasonable assumptions on the data, the standard Online Gradient Descent (OGD) algorithm guarantees logarithmic regret. To the best of our knowledge, the only previous algorithm to achieve logarithmic regret in the considered settings is the Online Newton Step algorithm, which requires quadratic (in the dimension) memory and the solution of a linear system on each iteration; this greatly limits its applicability to large-scale problems. We also show that in the corresponding stochastic convex optimization setting, Stochastic Gradient Descent achieves a convergence rate of $1/t$, matching the strongly convex case.
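
For reference, a common textbook statement of Hoffman's bound is sketched below; the polyhedron $P = \{x : Ax \le b\}$, the constant $\theta(A)$, and the choice of the Euclidean norm are notational assumptions of this sketch and are not taken from the abstract itself.

```latex
% A standard statement of Hoffman's bound (notation ours, not the paper's):
% for a polyhedron P = {x : Ax <= b}, there exists a constant \theta(A) > 0,
% depending only on A, such that
\[
  \operatorname{dist}(x, P) \;\le\; \theta(A)\,\bigl\| (Ax - b)_{+} \bigr\|_{2}
  \qquad \text{for all } x \in \mathbb{R}^{n},
\]
% where (\cdot)_{+} denotes the componentwise positive part, i.e. the
% magnitude of constraint violation.
```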
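For concreteness, here is a minimal, hedged sketch of projected Online Gradient Descent over a simple polytope (the probability simplex). The loss family, the curvature parameter `alpha`, and the `1/(alpha*t)` step-size schedule are illustrative assumptions, not the paper's exact algorithmic choices or tuning.

```python
# Minimal sketch of projected Online Gradient Descent (OGD) over a polytope.
# Everything here is illustrative: the simplex feasible set, the squared-linear
# losses, and the 1/(alpha*t) step size are assumptions, not the paper's setting.
import numpy as np

def project_simplex(v):
    """Euclidean projection onto the probability simplex (a simple polytope)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1.0))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1)
    return np.maximum(v - theta, 0.0)

def ogd(grad_fns, x0, alpha=1.0):
    """Run OGD with step size eta_t = 1/(alpha*t); alpha plays the role of a
    curvature parameter (an assumption made only for this sketch)."""
    x = project_simplex(np.asarray(x0, dtype=float))
    iterates = [x]
    for t, grad in enumerate(grad_fns, start=1):
        eta = 1.0 / (alpha * t)
        x = project_simplex(x - eta * grad(x))
        iterates.append(x)
    return iterates

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, T = 5, 200
    # Curved-but-not-strongly-convex style losses f_t(x) = (a_t @ x - b_t)**2,
    # again purely for illustration.
    A = rng.normal(size=(T, d))
    b = rng.normal(size=T)
    grads = [lambda x, a=A[t], bt=b[t]: 2.0 * (a @ x - bt) * a for t in range(T)]
    xs = ogd(grads, x0=np.ones(d) / d)
    print("final iterate:", xs[-1])
```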