Theoretical Analysis of Leave-one-out Cross Validation for Non-differentiable Penalties under High-dimensional Settings
arXiv (2024)
Abstract
Despite a large and significant body of recent work focused on estimating the
out-of-sample risk of regularized models in the high-dimensional regime, a
theoretical understanding of this problem for non-differentiable penalties such
as the generalized LASSO and the nuclear norm has been missing. In this paper we
resolve this challenge. We study the problem in the proportional high-dimensional
regime, where both the sample size n and the number of features p are large,
while n/p and the signal-to-noise ratio (per observation) remain finite. We
provide finite-sample upper bounds on the expected squared error of
leave-one-out cross-validation (LO) in estimating the out-of-sample risk. The
theoretical framework presented here provides a solid foundation for elucidating
empirical findings that show the accuracy of LO.
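As a concrete illustration of the quantity the paper analyzes, the sketch below computes the LO risk estimate for a LASSO problem in the proportional regime (n/p bounded, per-observation SNR of order one). This is not the paper's method, just a minimal numerical sketch: the LASSO solver is a plain proximal-gradient (ISTA) loop, and all problem sizes and the penalty level `lam=0.1` are illustrative choices.

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding operator, the proximal map of t*||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_ista(X, y, lam, n_iter=500):
    """Proximal gradient (ISTA) for (1/2n)||y - Xb||^2 + lam*||b||_1."""
    n, p = X.shape
    # Step size 1/L, where L is the Lipschitz constant of the smooth part.
    L = np.linalg.eigvalsh(X.T @ X / n).max()
    b = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ b - y) / n
        b = soft_threshold(b - grad / L, lam / L)
    return b

def loo_risk(X, y, lam):
    """Leave-one-out estimate of out-of-sample risk:
    refit without observation i, score the held-out point."""
    n = len(y)
    errs = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i
        b = lasso_ista(X[mask], y[mask], lam)
        errs[i] = (y[i] - X[i] @ b) ** 2
    return errs.mean()

# Proportional regime: n and p comparable, n/p stays bounded.
rng = np.random.default_rng(0)
n, p = 60, 40
beta = rng.normal(size=p) / np.sqrt(p)   # per-observation SNR of order one
X = rng.normal(size=(n, p))
y = X @ beta + rng.normal(size=n)

risk_lo = loo_risk(X, y, lam=0.1)
print(risk_lo)
```

The paper's bounds concern how close `risk_lo` is, in expected squared error, to the true out-of-sample risk of the fitted model; the brute-force n-refit loop above is the definition of LO, not an efficient approximation of it.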