Probabilistic Best Subset Selection by Gradient-Based Optimization

arXiv (2020)

Abstract
In high-dimensional statistics, variable selection is an optimization problem aiming to recover the latent sparse pattern from all possible covariate combinations. In this paper, we transform the optimization problem from a discrete space to a continuous one via reparameterization. The new objective function is a reformulation of the exact $L_0$-regularized regression problem (a.k.a. best subset selection). Within the framework of stochastic gradient descent, we propose a family of unbiased and efficient gradient estimators that are used to optimize the best subset selection objective and its variational lower bound. Within this family, we identify the estimator with a non-vanishing signal-to-noise ratio and uniformly minimum variance. Theoretically, we study the general conditions under which the method is guaranteed to converge to the ground truth in expectation. On a wide variety of synthetic and real data sets, the proposed method outperforms existing ones based on penalized regression or best subset selection, in both sparse pattern recovery and out-of-sample prediction. Our method can find the true regression model among thousands of covariates in a couple of seconds.
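The abstract gives no code, so the following is a minimal, self-contained sketch of the general idea only: the discrete support is reparameterized with Bernoulli inclusion variables, and the expected $L_0$ objective is optimized by stochastic gradient descent using a plain score-function (REINFORCE) gradient estimator with a moving-average baseline. This is an assumed, generic estimator for illustration, not the paper's specific minimum-variance estimator family; all names, hyperparameters, and the synthetic setup are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 3 of 20 covariates are truly active
# (illustrative setup, not the paper's experimental design).
n, p = 200, 20
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:3] = [2.0, -3.0, 1.5]
y = X @ beta_true + 0.1 * rng.standard_normal(n)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def l0_objective(s, beta, lam=0.5):
    """Exact L0-penalized least squares for a fixed support s in {0,1}^p."""
    r = y - X @ (s * beta)
    return r @ r / n + lam * s.sum()

phi = np.zeros(p)    # logits of the Bernoulli inclusion probabilities
beta = np.zeros(p)   # regression coefficients
baseline = 0.0       # moving-average control variate; independent of the
                     # current sample, so the estimator stays unbiased
lr_phi, lr_beta, n_mc = 0.3, 0.05, 8

for t in range(3000):
    pi = sigmoid(phi)
    g_phi = np.zeros(p)
    g_beta = np.zeros(p)
    for _ in range(n_mc):
        s = (rng.random(p) < pi).astype(float)  # sample a candidate support
        f = l0_objective(s, beta)
        # Score-function gradient: E[(f - b) * (s - pi)] = d E[f] / d phi.
        g_phi += (f - baseline) * (s - pi) / n_mc
        # Least-squares gradient in beta on the sampled support.
        g_beta += (-2.0 / n) * s * (X.T @ (y - X @ (s * beta))) / n_mc
        baseline = 0.99 * baseline + 0.01 * f
    phi -= lr_phi * g_phi
    beta -= lr_beta * g_beta

print("inclusion probabilities:", np.round(sigmoid(phi), 2))
print("recovered support:", np.flatnonzero(sigmoid(phi) > 0.5))
```

With this toy setup, the inclusion probabilities of the three active covariates are driven toward 1 and the rest toward 0; the paper's contribution, per the abstract, is a sharper estimator family with non-vanishing signal-to-noise ratio and uniformly minimum variance in place of the generic estimator sketched here.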
Keywords
selection, optimization, gradient-based