Filtering Bayesian optimization approach in weakly specified search space
Knowledge and Information Systems, pp. 1-29, 2018.
Our method extends the current Bayesian optimization framework to many practical applications and can readily be used with any acquisition function induced by a Gaussian process.
Bayesian optimization (BO) has recently emerged as a powerful and flexible tool for hyper-parameter tuning and, more generally, for the efficient global optimization of expensive black-box functions. Systems implementing BO have successfully solved difficult problems in automatic design choices and machine-learning hyper-parameter tuning.
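To make the setting concrete, below is a minimal sketch of the standard BO loop the paper builds on: a Gaussian-process surrogate fit to past evaluations, with a GP-UCB-style acquisition selecting the next point. This uses scikit-learn's GaussianProcessRegressor; the objective f, its bounds, and the candidate-grid acquisition maximization are illustrative assumptions, not the paper's setup.

```python
# Minimal BO loop sketch (illustrative): GP surrogate + GP-UCB acquisition.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def f(x):
    # Illustrative expensive black-box objective (1-D), peak at x = 0.3.
    return -(x - 0.3) ** 2

rng = np.random.default_rng(0)
lo, hi = 0.0, 1.0                      # pre-defined, fixed search space
X = rng.uniform(lo, hi, size=(3, 1))  # small initial design
y = f(X).ravel()

gp = GaussianProcessRegressor(normalize_y=True)
for t in range(10):
    gp.fit(X, y)
    cand = rng.uniform(lo, hi, size=(256, 1))   # random candidate set
    mu, sd = gp.predict(cand, return_std=True)
    x_next = cand[np.argmax(mu + 2.0 * sd)]     # GP-UCB acquisition
    X = np.vstack([X, x_next])
    y = np.append(y, f(x_next[0]))

best = float(X[np.argmax(y), 0])  # incumbent after the budget is spent
```

Note that the entire search is confined to the fixed interval [lo, hi]; if the optimum lay outside it, no amount of budget would recover it, which is the failure mode the paper targets.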
- Global optimization is fundamental to diverse real-world problems where parameter settings and design choices are pivotal, for example in algorithm hyper-parameter tuning [39,42].
- Existing Bayesian optimization approaches are restricted to a pre-defined, fixed search space that is assumed to contain the global optimum.
- To address the weakly specified problem for Bayesian optimization, we propose a filtering expansion strategy that starts from an initial region and gradually expands it to find the optimum.
- Given a fixed space and dimension, the regret is even worse when the evaluation budget T is small, which is the typical case in Bayesian optimization.
- If we overspecify the search space by setting it too large, optimization is inefficient and can be intractable in high dimensions given the limited number of evaluations, as discussed in Sect.
- Given a small evaluation budget in BO, it is not efficient to define an arbitrarily large search space.
- 4 Filtering expansion strategy for Bayesian optimization in weakly specified space
- By this we mean that the initial region is placed in a sufficiently good area that the optimization can expand from it and reach the optimum under a limited evaluation budget; the initial region need not contain the global optimum.
- Let x_t be our choice at iteration t. The instantaneous regret used in the standard Bayesian optimization setting is transformed to the case of expandable spaces as r_t = f^* − f(x_t) = (f^* − f_t^*) + (f_t^* − f(x_t)).
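Written out with each term labeled (here f^* denotes the global optimum, f_t^* the optimum within the current expanded space X_t, and x_t the point chosen at iteration t), the decomposition separates the two sources of regret:

```latex
r_t \;=\; f^* - f(x_t)
    \;=\; \underbrace{f^* - f_t^*}_{\text{gap from } X_t \text{ not yet covering the optimum}}
    \;+\; \underbrace{f_t^* - f(x_t)}_{\text{standard BO regret within } X_t}
```

The first term shrinks as the space expands toward the global optimum; the second is the usual regret of running BO inside a fixed region.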
- We summarize the proposed filtering expansion strategy for Bayesian optimization (FBO) under weakly specified space in Algorithm 2.
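Algorithm 2 itself is not reproduced here; the following is only a hedged sketch of the general shape of a filtering expansion loop: optimize within the current region and, when the incumbent sits near a boundary, expand the region on that side rather than everywhere. The one-sided growth rule, the 10% boundary margin, and the random-search stand-in for the BO inner step are all illustrative assumptions, not the paper's exact filtering criterion.

```python
import numpy as np

def filtering_expansion_bo(f, lo, hi, iters=20, seed=0):
    """Illustrative 1-D expansion loop; the true FBO filtering rule differs."""
    rng = np.random.default_rng(seed)
    best_x, best_y = None, -np.inf
    for t in range(iters):
        # Inner step: cheap random-search stand-in for maximizing a BO
        # acquisition function over the current region [lo, hi].
        cand = rng.uniform(lo, hi, size=64)
        vals = f(cand)
        i = int(np.argmax(vals))
        if vals[i] > best_y:
            best_x, best_y = float(cand[i]), float(vals[i])
        # Filtering-style expansion: if the incumbent is near a boundary,
        # grow the region on that side instead of expanding uniformly.
        width = hi - lo
        if best_x < lo + 0.1 * width:
            lo -= 0.5 * width
        elif best_x > hi - 0.1 * width:
            hi += 0.5 * width
    return best_x, best_y, (lo, hi)

# The true optimum x = 3.0 lies outside the weakly specified initial
# region [0, 1]; the loop expands rightward until it covers the peak.
x_star, y_star, region = filtering_expansion_bo(
    lambda x: -(x - 3.0) ** 2, 0.0, 1.0)
```

The key property mirrored from the paper's strategy is that expansion is driven by where good values are observed, so the region does not blow up in directions that look unpromising.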
- We study the regret bound of Bayesian optimization algorithms under a weakly specified space, and derive it below.
- The regret bound derived in Theorem 1 is general across multiple BO algorithms and covers the situation where the global optimum may not be included in the pre-defined space.
- We note that other expansion schemes in Bayesian optimization, including volume doubling, share a similar form of regret bound in Eqs.
- It makes sense to define an initial region X0 from this default setting and consider it as a weakly specified space.
- The search space containing the optimal value for the chosen elements is unknown and hard to specify due to limited knowledge.
- We have presented a new strategy for Bayesian optimization in a weakly specified search space.
- We will estimate and utilize partial derivative information to move, expand, or shrink the search space toward the location of the global optimum.
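As a rough illustration of that future direction (the paper does not specify this procedure), one could estimate the gradient of the surrogate mean at the incumbent by finite differences and translate the search region in the ascent direction. The function name, the step fraction, and the finite-difference scheme below are all illustrative assumptions.

```python
import numpy as np

def shift_region(mu, lo, hi, x_best, h=1e-3, step=0.25):
    """Translate a 1-D region [lo, hi] toward higher values of the surrogate
    mean mu, using a central finite-difference gradient estimate at the
    incumbent x_best. Illustrative sketch only, not the paper's procedure."""
    grad = (mu(x_best + h) - mu(x_best - h)) / (2.0 * h)
    shift = step * (hi - lo) * np.sign(grad)  # move a fixed fraction of the width
    return lo + shift, hi + shift

# Surrogate mean peaking at x = 2, incumbent near the right edge of [0, 1]:
new_lo, new_hi = shift_region(lambda x: -(x - 2.0) ** 2, 0.0, 1.0, 0.9)
# The region moves right, toward the peak.
```

A shrink step could analogously contract the region around the incumbent once the estimated gradient is near zero.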
- Develops an efficient algorithm for this strategy and derives its regret bound
- Focuses on a scenario in BO where the search space is “weakly known” by the domain experts
- Proposes a filtering expansion strategy that starts from an initial region and gradually expands it to find the optimum
- Demonstrates the efficacy of our approach in expanding a search space by optimizing several benchmark functions and tuning the hyper-parameters of a multi-label classification algorithm