Adversarial Robustness Guarantees for Gaussian Processes

arXiv (2022)

Abstract
Gaussian processes (GPs) enable principled computation of model uncertainty, making them attractive for safety-critical applications. Such scenarios demand that GP decisions are not only accurate, but also robust to perturbations. In this paper we present a framework to analyse adversarial robustness of GPs, defined as invariance of the model's decision to bounded perturbations. Given a compact subset of the input space T ⊂ ℝ^d, a point x* and a GP, we provide provable guarantees of adversarial robustness of the GP by computing lower and upper bounds on its prediction range in T. We develop a branch-and-bound scheme to refine the bounds and show, for any ε > 0, that our algorithm is guaranteed to converge to values ε-close to the actual values in finitely many iterations. The algorithm is anytime and can handle both regression and classification tasks, with analytical formulation for most kernels used in practice. We evaluate our methods on a collection of synthetic and standard benchmark data sets, including SPAM, MNIST and FashionMNIST. We study the effect of approximate inference techniques on robustness and demonstrate how our method can be used for interpretability. Our empirical results suggest that the adversarial robustness of GPs increases with accurate posterior estimation.
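To make the abstract's computational idea concrete, the sketch below illustrates the general flavour of interval bounding plus branch-and-bound refinement for a GP prediction over a compact box-shaped region T. It is a minimal, hypothetical sketch, not the authors' method: it assumes an RBF kernel, bounds only the minimum of the posterior mean over an axis-aligned box, and all function names (rbf, fit_gp, kernel_bounds_on_box, mean_bounds_on_box, branch_and_bound) are placeholders. The paper's actual guarantees cover the full prediction range, classification, and analytical bounds for most kernels used in practice.

```python
# Minimal illustrative sketch (assumptions: RBF kernel, bounding only the GP
# posterior mean over an axis-aligned box). Not the paper's implementation.
import numpy as np

def rbf(a, b, ls=1.0):
    """RBF kernel matrix between rows of a and rows of b."""
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2.0 * ls ** 2))

def fit_gp(X, y, ls=1.0, noise=1e-2):
    """Return alpha so that the posterior mean is mu(x) = k(x, X) @ alpha."""
    K = rbf(X, X, ls) + noise * np.eye(len(X))
    return np.linalg.solve(K, y)

def kernel_bounds_on_box(lo, hi, xi, ls=1.0):
    """Lower/upper bounds of k(x, xi) for all x in the box [lo, hi]."""
    # Squared distance to xi is minimised by clamping xi into the box and
    # maximised at the corner of the box farthest from xi.
    nearest = np.clip(xi, lo, hi)
    farthest = np.where(np.abs(lo - xi) > np.abs(hi - xi), lo, hi)
    d2_min = np.sum((nearest - xi) ** 2)
    d2_max = np.sum((farthest - xi) ** 2)
    return np.exp(-d2_max / (2 * ls ** 2)), np.exp(-d2_min / (2 * ls ** 2))

def mean_bounds_on_box(lo, hi, X, alpha, ls=1.0):
    """Interval [L, U] containing mu(x) for every x in the box [lo, hi]."""
    L = U = 0.0
    for xi, ai in zip(X, alpha):
        k_lo, k_hi = kernel_bounds_on_box(lo, hi, xi, ls)
        L += ai * (k_lo if ai >= 0 else k_hi)
        U += ai * (k_hi if ai >= 0 else k_lo)
    return L, U

def branch_and_bound(lo, hi, X, alpha, ls=1.0, eps=1e-3, max_iter=1000):
    """Refine a lower bound on min_{x in T} mu(x) until it is eps-tight."""
    boxes = [(lo.copy(), hi.copy())]
    for _ in range(max_iter):
        ivals = [mean_bounds_on_box(l, h, X, alpha, ls) for l, h in boxes]
        lower = min(L for L, _ in ivals)  # sound lower bound over all boxes
        # Evaluating mu at box centres gives a valid upper bound on the minimum.
        best = min(mean_bounds_on_box(0.5 * (l + h), 0.5 * (l + h),
                                      X, alpha, ls)[0] for l, h in boxes)
        if best - lower < eps:
            return lower, best
        # Split the box with the smallest lower bound along its longest side.
        i = int(np.argmin([L for L, _ in ivals]))
        l, h = boxes.pop(i)
        d = int(np.argmax(h - l))
        mid = 0.5 * (l[d] + h[d])
        hl, lh = h.copy(), l.copy()
        hl[d], lh[d] = mid, mid
        boxes += [(l, hl), (lh, h)]
    return lower, best

# Example use: certify a small box T around x* = (0, 0).
X = np.array([[0.0, 0.0], [1.0, 1.0]])
y = np.array([1.0, -1.0])
alpha = fit_gp(X, y)
lo, hi = np.array([-0.1, -0.1]), np.array([0.1, 0.1])
print(branch_and_bound(lo, hi, X, alpha))
```

In this framing, certifying robustness of a decision at x* amounts to checking that the bounded prediction range over T stays on one side of the decision threshold; for classification, an interval on the latent function would additionally be pushed through the link function.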
Keywords
robustness, Gaussian processes