Support estimation in high-dimensional heteroscedastic mean regression

Philipp Hermann, Hajo Holzmann

arXiv (2020)

Abstract
A current strand of research in high-dimensional statistics deals with robustifying the available methodology with respect to deviations from the pervasive light-tail assumptions. In this paper we consider a linear mean regression model with random design and potentially heteroscedastic, heavy-tailed errors, and investigate support estimation in this framework. We use a strictly convex, smooth variant of the Huber loss function with tuning parameter depending on the parameters of the problem, as well as the adaptive LASSO penalty for computational efficiency. For the resulting estimator we show sign-consistency and optimal rates of convergence in the $\ell_\infty$ norm as in the homoscedastic, light-tailed setting. In our analysis, we have to deal with the issue that the support of the target parameter in the linear mean regression model and its robustified version may differ substantially even for small values of the tuning parameter of the Huber loss function. Simulations illustrate the favorable numerical performance of the proposed methodology.
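The following is a minimal illustrative sketch of the kind of estimator the abstract describes: a smooth, strictly convex Huber-type loss combined with an adaptive LASSO penalty, solved by proximal gradient descent. It is not the paper's exact loss or tuning rule; the pseudo-Huber loss, the ridge pilot estimate used for the adaptive weights, and all parameter values below are illustrative assumptions.

```python
import numpy as np

def pseudo_huber_grad(residual, tau):
    # Derivative of the smooth Huber-type loss tau^2 * (sqrt(1 + (r/tau)^2) - 1);
    # bounded by tau, which is what gives robustness to heavy-tailed errors.
    return residual / np.sqrt(1.0 + (residual / tau) ** 2)

def adaptive_lasso_huber(X, y, tau=1.0, lam=0.1, gamma=1.0, n_iter=2000):
    """Proximal-gradient (ISTA) solver for a smoothed-Huber loss with an
    adaptive LASSO penalty.  Illustrative only; the paper's exact loss
    variant, tuning-parameter choice, and theory are not reproduced here."""
    n, p = X.shape
    # Pilot estimate for the adaptive weights (ridge keeps it well defined for p > n).
    beta_init = np.linalg.solve(X.T @ X + 0.1 * np.eye(p), X.T @ y)
    w = 1.0 / (np.abs(beta_init) ** gamma + 1e-8)

    L = np.linalg.eigvalsh(X.T @ X / n).max()   # step size from a Lipschitz bound
    beta = np.zeros(p)
    for _ in range(n_iter):
        r = y - X @ beta
        grad = -X.T @ pseudo_huber_grad(r, tau) / n
        z = beta - grad / L
        # Soft-thresholding = prox of the weighted l1 (adaptive LASSO) penalty.
        thr = lam * w / L
        beta = np.sign(z) * np.maximum(np.abs(z) - thr, 0.0)
    return beta

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, p, s = 200, 500, 5
    X = rng.standard_normal((n, p))
    beta_true = np.zeros(p)
    beta_true[:s] = 2.0
    # Heteroscedastic, heavy-tailed errors: t(3) noise scaled by |x_1|.
    eps = rng.standard_t(df=3, size=n) * (0.5 + np.abs(X[:, 0]))
    y = X @ beta_true + eps
    beta_hat = adaptive_lasso_huber(X, y, tau=1.5, lam=0.2)
    print("estimated support:", np.nonzero(np.abs(beta_hat) > 1e-3)[0])
```

Support estimation then amounts to reading off the nonzero coordinates of the penalized estimate, as in the last line; the paper's results concern when this recovered sign pattern matches that of the true mean-regression parameter.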
Keywords
estimation, support, high-dimensional