A Neural Framework for Generalized Causal Sensitivity Analysis
ICLR 2024 (2023)
Abstract
Unobserved confounding is common in many applications, making causal
inference from observational data challenging. As a remedy, causal sensitivity
analysis is an important tool to draw causal conclusions under unobserved
confounding with mathematical guarantees. In this paper, we propose NeuralCSA,
a neural framework for generalized causal sensitivity analysis. Unlike previous
work, our framework is compatible with (i) a large class of sensitivity models,
including the marginal sensitivity model, f-sensitivity models, and Rosenbaum's
sensitivity model; (ii) different treatment types (i.e., binary and
continuous); and (iii) different causal queries, including (conditional)
average treatment effects and simultaneous effects on multiple outcomes. The
generality of NeuralCSA is achieved by learning a latent distribution
shift that corresponds to a treatment intervention using two conditional
normalizing flows. We provide theoretical guarantees that NeuralCSA is able to
infer valid bounds on the causal query of interest and also demonstrate this
empirically using both simulated and real-world data.
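The abstract describes NeuralCSA only at a high level. As a rough illustration of the kind of computation involved, the hypothetical sketch below pairs a toy conditional flow (a stand-in for the paper's trained conditional normalizing flows) with the standard closed-form bound on E[Y | a] under the marginal sensitivity model, obtained by adversarially re-weighting outcome samples with likelihood ratios constrained to [1/Γ, Γ]. The affine flow and all function names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def flow(u, a):
    """Toy conditional affine flow: y = mu(a) + sigma(a) * u, u ~ N(0, 1).

    mu and sigma are stand-ins for trained conditional networks.
    """
    mu, sigma = 2.0 * a, 1.0 + 0.5 * a
    return mu + sigma * u

def msm_upper(y, gamma):
    """Upper bound on the mean of y when sample weights may vary in [1/gamma, gamma].

    The adversarial optimum is a step function: weight gamma on the largest
    outcomes and 1/gamma on the rest; we scan all cutoffs via prefix sums.
    """
    y = np.sort(y)
    n = len(y)
    s = np.concatenate([[0.0], np.cumsum(y)])
    k = np.arange(n + 1)                          # number of samples below the cutoff
    num = s[k] / gamma + (s[-1] - s[k]) * gamma   # weighted sum of outcomes
    den = k / gamma + (n - k) * gamma             # normalizing constant
    return (num / den).max()

def msm_bounds(y, gamma):
    """Lower and upper bounds; the lower bound is the upper bound of -y, negated."""
    return -msm_upper(-np.asarray(y), gamma), msm_upper(y, gamma)

# Sample Y | a = 1 through the toy flow, then bound its mean under Gamma = 2.
y = flow(rng.standard_normal(10_000), a=1.0)
lo, hi = msm_bounds(y, gamma=2.0)
```

At Γ = 1 the two bounds collapse to the plain sample mean (no unobserved confounding), and they widen monotonically as Γ grows; NeuralCSA's contribution is to realize such shifts in the latent space of a normalizing flow so the same machinery covers other sensitivity models, treatment types, and multi-outcome queries.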
Keywords
Causal machine learning, treatment effect estimation, sensitivity analysis, unobserved confounding, uncertainty estimation