Scale-invariant unconstrained online learning

Theoretical Computer Science (2020)

Cited by 8 | Views 37
Abstract
We consider an online supervised learning problem in which both the instances (input vectors) and the comparator (weight vector) are unconstrained. We exploit a natural scale-invariance symmetry in this unconstrained setting: the predictions of the optimal comparator are invariant under any linear transformation of the instances. Our goal is to design online algorithms which also enjoy this property, i.e., are scale-invariant. We start with the case of coordinate-wise invariance, in which the individual coordinates (features) can be arbitrarily rescaled. We give an algorithm which achieves an essentially optimal regret bound in this setup, expressed in terms of a coordinate-wise scale-invariant norm of the comparator. We then study general invariance with respect to arbitrary linear transformations. We first give a negative result, showing that no algorithm can achieve a meaningful bound in terms of a scale-invariant norm of the comparator in the worst case. Next, we complement this result with a positive one, providing an algorithm which "almost" achieves the desired bound, incurring only a logarithmic overhead in terms of the relative size of the instances.
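
To make the invariance concrete, here is a short sketch assuming the standard linear-prediction setup, where the comparator $u$ predicts via the inner product $u^\top x_t$ (this notation is ours, not taken from the paper). If every instance is transformed as $x_t \mapsto A x_t$ for an invertible matrix $A$, then replacing the comparator $u$ by $A^{-\top} u$ leaves all predictions unchanged:
\[
(A^{-\top} u)^\top (A x_t) = u^\top A^{-1} A x_t = u^\top x_t .
\]
In the coordinate-wise case, $A = \mathrm{diag}(a_1, \dots, a_d)$ with each $a_i \neq 0$, so rescaling feature $i$ by $a_i$ is undone by dividing weight $i$ by $a_i$; this is why a meaningful regret bound in this setting must be stated in a correspondingly rescaled, scale-invariant norm of the comparator.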
Keywords
Online learning, Online convex optimization, Scale invariance, Unconstrained online learning, Linear classification, Regret bound