Slow Kill for Big Data Learning

CoRR (2023)

Abstract
Big-data applications often involve a vast number of observations and features, creating new challenges for variable selection and parameter estimation. This paper presents a novel technique called "slow kill," which utilizes nonconvex constrained optimization, adaptive $\ell_2$-shrinkage, and increasing learning rates. Because the problem size can decrease during the slow kill iterations, the method is particularly effective for large-scale variable screening. The interplay between statistics and optimization provides valuable insights into controlling the quantile, stepsize, and shrinkage parameters to relax the regularity conditions required to achieve the desired level of statistical accuracy. Experimental results on real and synthetic data show that slow kill outperforms state-of-the-art algorithms in various situations while remaining computationally efficient for large-scale data.
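The abstract describes the algorithmic ingredients only at a high level. The sketch below illustrates how they might fit together for sparse linear regression: a gradient step, an $\ell_2$ (ridge) proximal shrinkage, and a quantile threshold that keeps a slowly decreasing number of coordinates, so features are "killed" gradually and the working problem size shrinks across iterations. This is a minimal illustration under assumptions, not the paper's actual algorithm: the function name `slow_kill_sketch`, the ridge weight `alpha`, and the specific quantile and stepsize schedules are all hypothetical.

```python
import numpy as np

def slow_kill_sketch(X, y, k, n_iter=50, alpha=0.1):
    """Illustrative slow-kill-style iteration for sparse linear regression.

    Each step: gradient descent, ell_2 (ridge) shrinkage, then keep the
    q(t) largest coordinates, with q(t) decreasing slowly toward the target
    sparsity k.  All schedules here are hypothetical placeholders.
    """
    n, p = X.shape
    beta = np.zeros(p)
    active = np.arange(p)  # surviving feature indices; shrinks over time
    for t in range(n_iter):
        Xa = X[:, active]
        grad = Xa.T @ (Xa @ beta[active] - y) / n
        # Increasing learning rate, scaled by the local Lipschitz constant.
        L = np.linalg.norm(Xa, 2) ** 2 / n
        eta = (0.5 + 0.5 * t / n_iter) / L
        # Gradient step followed by a ridge proximal step (ell_2 shrinkage).
        b = (beta[active] - eta * grad) / (1.0 + eta * alpha)
        # Quantile schedule: keep q(t) coordinates, decaying from p to k.
        q = max(k, int(p * (1.0 - (t + 1) / n_iter)))
        if q < active.size:
            keep = np.argsort(np.abs(b))[-q:]  # kill the smallest entries
            active, b = active[keep], b[keep]
        beta[:] = 0.0
        beta[active] = b
    return beta

# Example: recover a 5-sparse signal from Gaussian data.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 1000))
beta_true = np.zeros(1000)
beta_true[:5] = 3.0
y = X @ beta_true + 0.1 * rng.standard_normal(200)
print(np.flatnonzero(slow_kill_sketch(X, y, k=5)))  # ideally [0 1 2 3 4]
```

Because screening only removes coordinates, each iteration's gradient is computed on a smaller submatrix than the last, which is the source of the computational savings the abstract alludes to for large-scale data.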
Keywords
Top-down algorithms, sparsity, nonconvex optimization, nonasymptotic analysis, sub-Nyquist spectrum sensing