Random features models: a way to study the success of naive imputation
arXiv (2024)
Abstract
Constant (naive) imputation is still widely used in practice, as it is an easy-to-use first technique for dealing with missing data. Yet this simple method could be expected to induce a large bias for prediction purposes, since the imputed input may differ strongly from the true underlying data. However, recent works suggest that this bias is low in the context of high-dimensional linear predictors when data are assumed to be missing completely at random (MCAR). This paper completes the picture for linear predictors by confirming the intuition that the bias is negligible and that, surprisingly, naive imputation also remains relevant in very low dimension. To this aim, we consider a unique underlying random features model, which offers a rigorous framework for studying predictive performances while the dimension of the observed features varies. Building on these theoretical results, we establish finite-sample bounds on stochastic gradient (SGD) predictors applied to zero-imputed data, a strategy particularly well suited for large-scale learning. Although the MCAR assumption may appear strong, we show that similar favorable behaviors occur for more complex missing data scenarios.
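
To make the zero-imputation-plus-SGD strategy concrete, below is a minimal Python sketch: entries of a toy linear-model dataset are deleted completely at random (MCAR), missing values are replaced by zero, and a linear predictor is fitted by one pass of averaged SGD on the imputed data. The data-generating process, step size, and all variable names are illustrative assumptions, not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)

# Toy data: linear model y = <x, beta> + noise, with each entry of x
# missing completely at random (MCAR) with probability p_miss.
n, d, p_miss = 1000, 20, 0.3
X = rng.normal(size=(n, d))
beta = rng.normal(size=d)
y = X @ beta + 0.1 * rng.normal(size=n)
mask = rng.random((n, d)) < p_miss          # True where the entry is missing
X_obs = np.where(mask, np.nan, X)

# Naive (constant) imputation: replace every missing entry with zero.
X_imp = np.nan_to_num(X_obs, nan=0.0)

# One pass of plain SGD on the zero-imputed data (squared loss),
# with Polyak-Ruppert averaging of the iterates.
theta = np.zeros(d)
theta_avg = np.zeros(d)
lr = 0.01
for t in range(n):
    x_t, y_t = X_imp[t], y[t]
    grad = (x_t @ theta - y_t) * x_t        # gradient of 0.5 * (<x, theta> - y)^2
    theta -= lr * grad
    theta_avg += (theta - theta_avg) / (t + 1)

print("in-sample risk proxy:", np.mean((X_imp @ theta_avg - y) ** 2))

Note that the predictor is both trained and evaluated on zero-imputed inputs, matching the setting the abstract describes; the single pass with iterate averaging is what makes the approach attractive for large-scale learning.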