Adversarially Robust Optimization With Gaussian Processes

Advances in Neural Information Processing Systems 31 (NIPS 2018)

Abstract
In this paper, we consider the problem of Gaussian process (GP) optimization with an added robustness requirement: The returned point may be perturbed by an adversary, and we require the function value to remain as high as possible even after this perturbation. This problem is motivated by settings in which the underlying functions during optimization and implementation stages are different, or when one is interested in finding an entire region of good inputs rather than only a single point. We show that standard GP optimization algorithms do not exhibit the desired robustness properties, and provide a novel confidence-bound based algorithm STABLEOPT for this purpose. We rigorously establish the required number of samples for STABLEOPT to find a near-optimal point, and we complement this guarantee with an algorithm-independent lower bound. We experimentally demonstrate several potential applications of interest using real-world data sets, and we show that STABLEOPT consistently succeeds in finding a stable maximizer where several baseline methods fail.
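The following is a minimal sketch of the max-min confidence-bound idea described in the abstract, assuming a discretized 1-D domain and an epsilon-ball adversary. The test objective, kernel settings, beta schedule, and sample budget below are illustrative assumptions, and the selection rule shown (maximize the worst-case upper confidence bound over the perturbation set, then let the adversary minimize the lower confidence bound) is one reading of the confidence-bound approach described here, not necessarily the paper's exact STABLEOPT procedure.

```python
# Sketch: robust GP optimization on a discretized 1-D domain with an eps-ball adversary.
# Everything below (objective f, kernel, beta, budget) is an illustrative assumption.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def f(x):
    # Hypothetical objective: a broad (stable) peak near 0.3 and a narrow (unstable) spike near 0.8.
    return np.exp(-((x - 0.3) / 0.15) ** 2) + 1.2 * np.exp(-((x - 0.8) / 0.02) ** 2)

grid = np.linspace(0.0, 1.0, 201)
eps, noise, beta = 0.05, 0.01, 2.0          # adversary radius, observation noise std, confidence width
neighbors = [np.where(np.abs(grid - x) <= eps)[0] for x in grid]  # eps-ball around each grid point

# Seed with two random noisy observations.
X_obs, y_obs = [], []
for x0 in rng.uniform(0.0, 1.0, size=2):
    X_obs.append([x0]); y_obs.append(f(x0) + noise * rng.standard_normal())

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.1), alpha=noise**2)

for t in range(30):
    gp.fit(np.array(X_obs), np.array(y_obs))
    mu, sd = gp.predict(grid[:, None], return_std=True)
    ucb, lcb = mu + beta * sd, mu - beta * sd

    # Pick the point whose worst-case upper confidence bound over the eps-ball is largest ...
    robust_ucb = np.array([ucb[idx].min() for idx in neighbors])
    i = int(np.argmax(robust_ucb))
    # ... then let the adversary pick the perturbation minimizing the lower bound, and sample there.
    j = neighbors[i][int(np.argmin(lcb[neighbors[i]]))]
    X_obs.append([grid[j]]); y_obs.append(f(grid[j]) + noise * rng.standard_normal())

# Report the point with the best worst-case lower confidence bound.
gp.fit(np.array(X_obs), np.array(y_obs))
mu, sd = gp.predict(grid[:, None], return_std=True)
lcb = mu - beta * sd
robust_lcb = np.array([lcb[idx].min() for idx in neighbors])
i_best = int(np.argmax(robust_lcb))
print(f"robust choice: x = {grid[i_best]:.3f}, "
      f"worst-case f over the eps-ball = {f(grid[neighbors[i_best]]).min():.3f}")
```

On this toy objective, a standard (non-robust) maximizer would favor the narrow spike near 0.8, whereas the worst-case criterion prefers the broad peak near 0.3, which keeps a high value under any perturbation of radius eps.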
Keywords
optimization problems, Gaussian processes, Gaussian process, parallel optimization, function value, single point