Optimal Dynamic Regret in Proper Online Learning with Strongly Convex Losses and Beyond

INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 151(2022)

Abstract
We study the framework of universal dynamic regret minimization with strongly convex losses. We answer an open problem posed in (Baby and Wang, 2021) by showing that, in a proper learning setup, Strongly Adaptive algorithms can achieve the near-optimal dynamic regret of Õ(d^(1/3) n^(1/3) TV[u_(1:n)]^(2/3) ∨ d) against any comparator sequence u_1, …, u_n simultaneously, where n is the time horizon and TV[u_(1:n)] is the total variation of the comparator sequence. These results are facilitated by exploiting a number of new structures imposed by the KKT conditions that were not considered in (Baby and Wang, 2021), which also lead to other improvements over their results, such as: (a) handling non-smooth losses and (b) improving the dimension dependence of the regret. Further, we derive near-optimal dynamic regret rates for the special case of proper online learning with exp-concave losses and an L1-constrained decision set.
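The headline bound can be written more cleanly in LaTeX. This is a sketch reconstructed from the abstract's notation; the regret symbol R_n is shorthand introduced here for illustration and is not defined in the abstract itself:

```latex
% Near-optimal dynamic regret bound from the abstract, where d is the
% dimension, n the time horizon, and TV[u_{1:n}] the total variation of
% the comparator sequence u_1, ..., u_n. "∨" denotes the maximum.
R_n(u_1, \dots, u_n) = \tilde{O}\!\left( d^{1/3} n^{1/3}\, \mathrm{TV}[u_{1:n}]^{2/3} \;\vee\; d \right)
```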
Keywords
optimal dynamic regret, proper online learning, losses