Online Robust Regression via SGD on the l1 loss

NeurIPS 2020 (2020)

Citations: 33 | Views: 65
Abstract
We consider the robust linear regression problem in the online setting, where we access the data in a streaming manner, one point after another. More specifically, for a true parameter θ^*, we consider the corrupted Gaussian linear model y = ⟨x, θ^*⟩ + ε + b, where the adversarial noise b can take any value with probability η and equals zero otherwise. We take this adversary to be oblivious (i.e., b is independent of the data), since this is the only contamination model under which consistency is possible. Existing algorithms require access to the entire dataset in order to identify and remove the outliers. In contrast, we show in this work that stochastic gradient descent on the ℓ_1 loss converges to the true parameter vector at a Õ(1 / ((1 - η)^2 n)) rate, which is independent of the values of the contaminated measurements. Our proof relies on the elegant smoothing of the non-smooth ℓ_1 loss by the Gaussian data and on a classical non-asymptotic analysis of Polyak-Ruppert averaged SGD. In addition, we provide experimental evidence of the efficiency of this simple and highly scalable algorithm.
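The procedure the abstract describes can be sketched in a few lines: stream corrupted samples, take a subgradient step on the ℓ_1 loss, and keep a Polyak-Ruppert running average of the iterates. The sketch below is a minimal illustration under assumed settings (dimension, step size 1/√t, noise and corruption levels are all illustrative choices, not taken from the paper):

```python
import numpy as np

# Minimal sketch (not the authors' code) of averaged SGD on the l1 loss
# under the oblivious contamination model from the abstract:
#   y = <x, theta*> + eps + b, with b nonzero with probability eta.
rng = np.random.default_rng(0)

d, n, eta = 5, 20000, 0.2          # dimension, stream length, corruption rate (assumed)
theta_star = rng.normal(size=d)    # true parameter

theta = np.zeros(d)                # current SGD iterate
theta_bar = np.zeros(d)            # Polyak-Ruppert average of the iterates

for t in range(1, n + 1):
    x = rng.normal(size=d)                                   # Gaussian covariate
    eps = 0.1 * rng.normal()                                 # inlier noise
    b = 50.0 * rng.normal() if rng.random() < eta else 0.0   # oblivious outlier
    y = x @ theta_star + eps + b

    # Subgradient step on |y - <x, theta>|; step size 1/sqrt(t) is one standard choice.
    theta += (1.0 / np.sqrt(t)) * np.sign(y - x @ theta) * x
    theta_bar += (theta - theta_bar) / t                     # running average

# The averaged iterate stays close to theta_star despite 20% arbitrary outliers.
print(np.linalg.norm(theta_bar - theta_star))
```

Note how the update never inspects whether a sample is an outlier: the bounded sign(·) subgradient is what makes the method insensitive to the magnitude of the contamination.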
Keywords
online robust regression, ℓ1 loss, SGD