Adapting To Smoothness: A More Universal Algorithm For Online Convex Optimization

THIRTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THE THIRTY-SECOND INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE AND THE TENTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE (2020)

Abstract
We aim to design universal algorithms for online convex optimization, which can handle multiple common types of loss functions simultaneously. The previous state-of-the-art universal method achieves minimax optimality for general convex, exponentially concave, and strongly convex loss functions. However, it remains an open problem whether smoothness can be exploited to further improve the theoretical guarantees. In this paper, we provide an affirmative answer by developing a novel algorithm, namely UFO, which achieves O(√L*), O(d log L*), and O(log L*) regret bounds for the three types of loss functions respectively under the assumption of smoothness, where L* is the cumulative loss of the best comparator in hindsight and d is the dimensionality. Thus, our regret bounds are much tighter when the comparator has a small loss, and they ensure minimax optimality in the worst case. In addition, it is worth pointing out that UFO is the first to achieve the O(log L*) regret bound for strongly convex and smooth functions, which is tighter than the existing small-loss bound by an O(d) factor.
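For context, the quantities in these bounds are the standard ones from online convex optimization; the sketch below states them explicitly under our own notation (the losses f_t, decisions x_t, and domain X are assumed here, not taken from the paper):

\[
\mathrm{Regret}_T \;=\; \sum_{t=1}^{T} f_t(\mathbf{x}_t) \;-\; \min_{\mathbf{x}\in\mathcal{X}} \sum_{t=1}^{T} f_t(\mathbf{x}),
\qquad
L_* \;=\; \min_{\mathbf{x}\in\mathcal{X}} \sum_{t=1}^{T} f_t(\mathbf{x}).
\]

Since L* never exceeds the worst-case cumulative loss, bounds of the form O(√L*), O(d log L*), and O(log L*) are small-loss (data-dependent) bounds: they shrink when the best comparator performs well, and they recover the usual minimax rates O(√T), O(d log T), and O(log T) when L* grows linearly in T.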