Robust and efficient subsampling algorithms for massive data logistic regression

JOURNAL OF APPLIED STATISTICS (2023)

Abstract
Datasets that are large in volume, variety and velocity are becoming increasingly common, yet limits on computer processing power often restrict the analyses that can be performed on them. Nonuniform subsampling methods are effective in reducing the computational load for massive data, but the variance of the resulting estimator becomes large when the subsampling probabilities are highly heterogeneous. To address this, we develop two new algorithms that improve estimation for massive data logistic regression, based on a chosen hard threshold value and on combining subsamples, respectively. The basic idea of the hard-threshold method is to carefully select a threshold value and replace any subsampling probability below it with the threshold itself. The combining-subsamples method better exploits the information in the data without hitting the computational bottleneck by generating many subsamples and then combining the estimates constructed from them. It also yields the standard error of the parameter estimator without estimating the sandwich matrix, which simplifies statistical inference for massive data and can significantly improve estimation efficiency. Asymptotic properties of the resulting estimators are established, and simulations and analyses of real data assess and showcase the practical performance of the proposed methods.
Keywords
efficient subsampling algorithms, massive data, regression
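
The two ideas summarized in the abstract can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the pilot-based probability form (proportional to |y_i - p_i| * ||x_i||), the threshold value delta, the subsample size r, and the number of repeated subsamples B are all assumptions made for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Simulated "massive" dataset, for illustration only.
n, d = 100_000, 5
X = rng.normal(size=(n, d))
beta_true = np.full(d, 0.5)
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ beta_true)))

# Pilot fit on a small uniform subsample (large C ~= no regularization).
pilot_idx = rng.choice(n, size=1_000, replace=False)
pilot = LogisticRegression(C=1e6).fit(X[pilot_idx], y[pilot_idx])
p_hat = pilot.predict_proba(X)[:, 1]

# Nonuniform subsampling probabilities (an assumed form:
# proportional to |y_i - p_i| * ||x_i||).
scores = np.abs(y - p_hat) * np.linalg.norm(X, axis=1)
probs = scores / scores.sum()

# Hard threshold: raise probabilities below delta up to delta and
# renormalize, so the inverse-probability weights stay bounded.
delta = 0.1 / n                      # illustrative threshold choice
probs = np.maximum(probs, delta)
probs /= probs.sum()

# Draw one subsample and fit a weighted logistic regression, weighting
# each point by 1 / (r * pi_i) to correct the sampling bias.
r = 2_000
idx = rng.choice(n, size=r, replace=True, p=probs)
w = 1.0 / (r * probs[idx])
fit = LogisticRegression(C=1e6).fit(X[idx], y[idx], sample_weight=w)
print("single-subsample estimate:", fit.coef_.ravel())

# Combining-subsamples idea (sketched): repeat B times, average the
# estimates, and use their spread for standard errors instead of
# estimating a sandwich covariance matrix.
B = 20
est = np.empty((B, d))
for b in range(B):
    idx_b = rng.choice(n, size=r, replace=True, p=probs)
    w_b = 1.0 / (r * probs[idx_b])
    est[b] = LogisticRegression(C=1e6).fit(
        X[idx_b], y[idx_b], sample_weight=w_b).coef_.ravel()
print("combined estimate:", est.mean(axis=0))
print("standard errors:  ", est.std(axis=0, ddof=1) / np.sqrt(B))
```

The thresholding step bounds the inverse-probability weights, which is what controls the estimator's variance when the subsampling probabilities are highly heterogeneous; the repeated-subsample loop illustrates how standard errors can be read off the spread of the combined estimates rather than from a sandwich matrix.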