A prediction rigidity formalism for low-cost uncertainties in trained neural networks
CoRR (2024)
Abstract
Regression methods are fundamental to scientific and technological
applications. However, fitted models can be highly unreliable outside their
training domain, so quantifying their uncertainty is crucial in many
settings. Based on the solution of a constrained
optimization problem, we propose "prediction rigidities" as a method to obtain
uncertainties of arbitrary pre-trained regressors. We establish a strong
connection between our framework and Bayesian inference, and we develop a
last-layer approximation that allows the new method to be applied to neural
networks. This extension affords cheap uncertainties without any modification
to the neural network itself or its training procedure. We demonstrate the
effectiveness of our method on regression tasks ranging from simple toy
models to applications in chemistry and meteorology.
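The last-layer approximation described in the abstract can be pictured as a Gaussian (Laplace-style) treatment of the final linear layer of an otherwise frozen network: uncertainties are derived from the features feeding the last layer, with no retraining. The sketch below illustrates this idea under stated assumptions; all names (`H`, `h_star`, `sigma2`, `lam`) are illustrative placeholders, not the paper's notation or API, and the exact prediction-rigidity expression is defined in the paper itself.

```python
# Minimal sketch of a last-layer uncertainty estimate for a pre-trained
# regressor. Assumptions: H holds last-layer features of the training set
# (one row per point), h_star is the query's feature vector, and sigma2
# and lam are assumed noise-variance and regularization hyperparameters.
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for features extracted from a trained network (assumed given).
n_train, d = 200, 16
H = rng.normal(size=(n_train, d))   # training-set last-layer features
h_star = rng.normal(size=d)         # last-layer features of a query point

sigma2 = 0.1   # assumed target-noise variance
lam = 1e-3     # assumed regularization strength (stabilizes the inverse)

# Gaussian (Laplace-style) covariance over the last-layer weights:
# Sigma = sigma2 * (H^T H + lam * I)^(-1)
cov = sigma2 * np.linalg.inv(H.T @ H + lam * np.eye(d))

# Predictive variance of the model output at the query point.
var_star = float(h_star @ cov @ h_star)
print(f"predictive std at query: {np.sqrt(var_star):.4f}")
```

Note that the only training-set quantity this sketch needs is the small d-by-d matrix HᵀH, which is why such last-layer uncertainties are cheap to obtain for an already-trained network.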