Stronger calibration lower bounds via sidestepping

ACM Symposium on Theory of Computing (2021)

Abstract
We consider an online binary prediction setting where a forecaster observes a sequence of T bits one by one. Before each bit is revealed, the forecaster predicts the probability that the bit is 1. The forecaster is called well-calibrated if for each p ∈ [0, 1], among the n_p bits for which the forecaster predicts probability p, the actual number of ones, m_p, is indeed equal to p · n_p. The calibration error, defined as ∑_p |m_p − p · n_p|, quantifies the extent to which the forecaster deviates from being well-calibrated. It has long been known that an O(T^{2/3}) calibration error is achievable even when the bits are chosen adversarially, and possibly based on the previous predictions. However, little is known on the lower bound side, except an Ω(√T) bound that follows from the trivial example of independent fair coin flips. In this paper, we prove an Ω(T^{0.528}) bound on the calibration error, which is the first super-√T lower bound for this setting to the best of our knowledge. The technical contributions of our work include two lower bound techniques, early stopping and sidestepping, which circumvent the obstacles that have previously hindered strong calibration lower bounds. We also propose an abstraction of the prediction setting, termed the Sign-Preservation game, which may be of independent interest. This game has a much smaller state space than the full prediction setting and allows simpler analyses. The Ω(T^{0.528}) lower bound follows from a general reduction theorem that translates lower bounds on the game value of Sign-Preservation into lower bounds on the calibration error.
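The calibration error defined above can be computed directly from a transcript of predictions and observed bits. The following sketch (an illustrative helper of our own, not code from the paper) makes the definition concrete: group rounds by the predicted value p, count n_p and m_p per group, and sum the deviations.

```python
from collections import defaultdict

def calibration_error(predictions, outcomes):
    """Compute sum_p |m_p - p * n_p|, where n_p is the number of rounds
    on which probability p was predicted and m_p is the number of those
    rounds whose bit was 1. (Hypothetical helper for illustration.)"""
    n = defaultdict(int)  # n[p]: rounds with prediction p
    m = defaultdict(int)  # m[p]: ones among those rounds
    for p, bit in zip(predictions, outcomes):
        n[p] += 1
        m[p] += bit
    return sum(abs(m[p] - p * n[p]) for p in n)
```

For example, predicting 0.5 on four bits of which exactly two are ones gives error 0 (perfectly calibrated on that level), while four ones under the same predictions give error |4 − 2| = 2.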
Keywords

online prediction, calibration