LINNA: Likelihood Inference Neural Network Accelerator

JOURNAL OF COSMOLOGY AND ASTROPARTICLE PHYSICS (2023)

Abstract
Bayesian posterior inference of modern multi-probe cosmological analyses incurs massive computational costs. For instance, depending on the combinations of probes, a single posterior inference for the Dark Energy Survey (DES) data had a wall-clock time that ranged from 1 to 21 days using a state-of-the-art computing cluster with 100 cores. These computational costs have severe environmental impacts and the long wall-clock time slows scientific productivity. To address these difficulties, we introduce LINNA: the Likelihood Inference Neural Network Accelerator. Relative to the baseline DES analyses, LINNA reduces the computational cost associated with posterior inference by a factor of 8-50. If applied to the first-year cosmological analysis of Rubin Observatory's Legacy Survey of Space and Time (LSST Y1), we conservatively estimate that LINNA will save more than U.S. $300,000 on energy costs, while simultaneously reducing CO2 emission by 2,400 tons. To accomplish these reductions, LINNA automatically builds training data sets, creates neural network emulators, and produces a Markov chain that samples the posterior. We explicitly verify that LINNA accurately reproduces the first-year DES (DES Y1) cosmological constraints derived from a variety of different data vectors with our default code settings, without needing to retune the algorithm every time. Further, we find that LINNA is sufficient for enabling accurate and efficient sampling for LSST Y10 multi-probe analyses. We make LINNA publicly available at https://github.com/chto/linna, to enable others to perform fast and accurate posterior inference in contemporary cosmological analyses.
Keywords
Bayesian reasoning, Machine learning, Statistical sampling techniques, cosmological parameters from LSS
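
The abstract describes a three-stage workflow: build a training set of expensive theory evaluations, fit a neural network emulator to them, and then run an MCMC sampler that calls the cheap emulator inside the likelihood. The sketch below is a minimal, generic illustration of that idea, not LINNA's actual interface; the toy model, the scikit-learn MLPRegressor emulator, the emcee sampler, and all parameter names are assumptions chosen for self-containment.

```python
import numpy as np
import emcee
from sklearn.neural_network import MLPRegressor

# Toy "expensive" theory code: maps two parameters to a 10-point data vector.
# In a real analysis this would be a Boltzmann code plus the survey modeling
# pipeline (hypothetical stand-in; LINNA's real pipeline differs).
def expensive_theory(theta):
    a, b = theta
    x = np.linspace(0.1, 1.0, 10)
    return a * np.sin(3.0 * x) + b * x**2

rng = np.random.default_rng(0)
prior_lo, prior_hi = np.array([-2.0, -2.0]), np.array([2.0, 2.0])

# Step 1: build a training set by drawing parameters from the prior
# and evaluating the expensive model once per point.
n_train = 2000
theta_train = rng.uniform(prior_lo, prior_hi, size=(n_train, 2))
y_train = np.array([expensive_theory(t) for t in theta_train])

# Step 2: fit a small neural-network emulator of the theory prediction.
emulator = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                        random_state=0)
emulator.fit(theta_train, y_train)

# Mock observed data vector with a diagonal covariance (illustration only).
theta_true = np.array([0.7, -0.3])
sigma = 0.05
data = expensive_theory(theta_true) + rng.normal(0.0, sigma, size=10)

# Step 3: sample the posterior with MCMC, calling the cheap emulator
# instead of the expensive theory code inside the likelihood.
def log_posterior(theta):
    if np.any(theta < prior_lo) or np.any(theta > prior_hi):
        return -np.inf  # flat prior
    model = emulator.predict(theta.reshape(1, -1))[0]
    return -0.5 * np.sum((data - model) ** 2 / sigma**2)

n_walkers, n_dim = 16, 2
p0 = rng.uniform(prior_lo, prior_hi, size=(n_walkers, n_dim))
sampler = emcee.EnsembleSampler(n_walkers, n_dim, log_posterior)
sampler.run_mcmc(p0, 3000, progress=False)
print("posterior mean:", sampler.get_chain(discard=500, flat=True).mean(axis=0))
```

The released package automates the steps sketched here (training-set construction, emulator training, and sampling); see the repository linked in the abstract for its actual API and settings.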