Online Quantile Regression
arXiv (2024)
Abstract
This paper tackles the challenge of integrating sequentially arriving data
within the quantile regression framework, where the number of covariates is
allowed to grow with the number of observations, the horizon is unknown, and
memory is limited. We employ stochastic sub-gradient descent to minimize the
empirical check loss and study its statistical properties and regret
performance. In our analysis, we unveil the delicate interplay between updating
iterates based on individual observations versus batches of observations,
revealing distinct regularity properties in each scenario. Our method ensures
long-term optimal estimation irrespective of the chosen update strategy.
Importantly, our contributions go beyond prior works by achieving
exponential-type concentration inequalities and attaining optimal regret and
error rates that exhibit only short-term sensitivity to initial errors. A key
insight from our study, obtained through delicate statistical analysis, is
that appropriate stepsize schemes significantly mitigate the impact of initial
errors on subsequent errors and regret. This underscores the robustness of
stochastic sub-gradient descent in handling initial uncertainties, emphasizing
its efficacy in scenarios where the sequential arrival of data introduces
uncertainties regarding both the horizon and the total number of observations.
Additionally, when the initial error rate is well controlled, there is a
trade-off between the short-term error rate and long-term optimality. Because a
comparably delicate statistical analysis is lacking for the squared loss, we
also briefly discuss its properties and appropriate stepsize schemes. Extensive
simulations support our theoretical findings.
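The abstract describes stochastic sub-gradient descent on the empirical check loss with sequentially arriving observations. The following is a minimal sketch of that setup, not the paper's exact algorithm: the function names (`pinball_subgradient`, `online_quantile_sgd`), the 1/sqrt(t) stepsize schedule, and the toy data generator are all illustrative assumptions, and the paper's specific stepsize schemes and batch-update variants are not reproduced here.

```python
import numpy as np

def pinball_subgradient(beta, x, y, tau):
    """Subgradient of the check (pinball) loss rho_tau(y - x'beta) w.r.t. beta.

    rho_tau(u) = u * (tau - 1{u < 0}); at u = 0 we take the subgradient
    corresponding to the indicator equal to 0.
    """
    u = y - x @ beta
    return -(tau - float(u < 0)) * x

def online_quantile_sgd(stream, dim, tau=0.5, step=lambda t: 1.0 / np.sqrt(t)):
    """One pass of stochastic sub-gradient descent over a data stream.

    `stream` yields (x, y) pairs one at a time, so memory use is O(dim),
    matching the limited-memory online setting. The stepsize schedule
    `step` is an illustrative choice, not the paper's scheme; batch
    updates would instead average the subgradient over a mini-batch.
    """
    beta = np.zeros(dim)
    for t, (x, y) in enumerate(stream, start=1):
        beta -= step(t) * pinball_subgradient(beta, x, y, tau)
    return beta

# Toy usage: median regression (tau = 0.5) with symmetric noise, so the
# estimate should approach beta_star as the stream length grows.
rng = np.random.default_rng(0)
beta_star = np.array([1.0, -2.0, 0.5])

def make_stream(n):
    for _ in range(n):
        x = rng.normal(size=3)
        yield x, x @ beta_star + rng.normal()

beta_hat = online_quantile_sgd(make_stream(20_000), dim=3, tau=0.5)
print(beta_hat)
```

For quantile levels other than 0.5, the target coefficients shift by the noise quantile in the intercept direction, so an intercept feature would be needed to recover the conditional quantile exactly.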