Differentiable Expected Hypervolume Improvement for Parallel Multi-Objective Bayesian Optimization

NeurIPS 2020.


Abstract:

In many real-world scenarios, decision makers seek to efficiently optimize multiple competing objectives in a sample-efficient fashion. Multi-objective Bayesian optimization (BO) is a common approach, but many existing acquisition functions do not have known analytic gradients and suffer from high computational overhead. We leverage recent advances in differentiable programming and parallel hardware to compute exact gradients of a Monte Carlo estimator of q-Expected Hypervolume Improvement (qEHVI), an acquisition function that extends EHVI to the parallel, constrained evaluation setting, enabling efficient acquisition optimization via first-order and quasi-second-order methods.
Introduction
  • The problem of optimizing multiple competing objectives is ubiquitous in scientific and engineering applications.
  • Evaluating the crash safety of an automobile design experimentally is expensive due to both the manufacturing time and the destruction of a vehicle.
  • In such a scenario, sample efficiency is paramount.
  • An automaker could manufacture multiple vehicle designs in parallel, or a web service could deploy several control policies to different segments of traffic at the same time.
Highlights
  • The problem of optimizing multiple competing objectives is ubiquitous in scientific and engineering applications
  • We demonstrate that, using modern GPU hardware and exact gradients, optimizing the q-Expected Hypervolume Improvement (qEHVI) acquisition function is faster than existing state-of-the-art methods in many practical scenarios
  • Our empirical evaluation shows that qEHVI outperforms state-of-the-art multi-objective Bayesian optimization (BO) algorithms using a fraction of their wall time
  • Leveraging differentiable programming and modern parallel hardware, we are able to efficiently optimize qEHVI via quasi-second-order methods, for which we provide convergence guarantees (see the sketch after this list)
  • We demonstrate that our method achieves performance superior to that of state-of-the-art MO BO approaches
  • Extending qEHVI to noisy observations would be nontrivial in the parallel case: such an integration would be equivalent to a noiseless qEHVI computation with batch size |P| + q, which would be prohibitively expensive since computation scales exponentially with the batch size
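The qEHVI acquisition function is implemented in the open-source BoTorch library [5]. Below is a minimal sketch of the workflow the highlights describe, i.e., fitting a GP surrogate and jointly optimizing a batch of q candidates by backpropagating through the Monte Carlo estimator; the module paths, signatures, and toy data are assumptions that may differ across BoTorch releases.

```python
# Minimal sketch of jointly optimizing q candidates with qEHVI via exact
# Monte Carlo gradients, assuming a recent BoTorch release [5]; module
# paths and signatures may vary across versions.
import torch
from botorch.models import SingleTaskGP
from botorch.fit import fit_gpytorch_mll
from gpytorch.mlls import ExactMarginalLogLikelihood
from botorch.acquisition.multi_objective.monte_carlo import (
    qExpectedHypervolumeImprovement,
)
from botorch.utils.multi_objective.box_decompositions.non_dominated import (
    NondominatedPartitioning,
)
from botorch.optim import optimize_acqf

# Toy data: 2-dim inputs in [0, 1]^2, two objectives (both maximized).
train_X = torch.rand(10, 2, dtype=torch.double)
train_Y = torch.stack(
    [-(train_X**2).sum(-1), -((train_X - 1) ** 2).sum(-1)], dim=-1
)

model = SingleTaskGP(train_X, train_Y)
fit_gpytorch_mll(ExactMarginalLogLikelihood(model.likelihood, model))

# Reference point and box decomposition of the region dominated by the
# current Pareto front, used in the hypervolume improvement computation.
ref_point = torch.tensor([-2.0, -2.0], dtype=torch.double)
partitioning = NondominatedPartitioning(ref_point=ref_point, Y=train_Y)

acqf = qExpectedHypervolumeImprovement(
    model=model,
    ref_point=ref_point.tolist(),
    partitioning=partitioning,
)

# optimize_acqf backpropagates through the MC estimator (auto-differentiation)
# and uses a quasi-second-order method (L-BFGS-B) to select q = 4 points jointly.
candidates, acq_value = optimize_acqf(
    acq_function=acqf,
    bounds=torch.tensor([[0.0, 0.0], [1.0, 1.0]], dtype=torch.double),
    q=4,
    num_restarts=10,
    raw_samples=128,
)
```

Because the estimator is differentiable in the full candidate set, the q points are optimized jointly rather than greedily.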
Methods
  • The authors empirically evaluate qEHVI on synthetic and real-world optimization problems.
  • The authors compare qEHVI against existing state-of-the-art methods, including SMS-EGO [50], PESMO [32], and analytic EHVI [64] with gradients.
  • The authors also compare against a novel extension of ParEGO [39] that supports parallel evaluation and constraints, neither of which, to their knowledge, has been done before; they call this method qParEGO (its scalarization is sketched after this list).
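For context, the sketch below illustrates the augmented Chebyshev scalarization underlying ParEGO [39] and its qParEGO extension; the helper name and the maximization convention are assumptions, while ρ = 0.05 follows Knowles [39].

```python
# Sketch of the augmented Chebyshev scalarization behind ParEGO [39] and
# qParEGO; rho = 0.05 follows Knowles [39]. Assumes maximization and
# objectives normalized to [0, 1]; the helper name is hypothetical.
import torch

def augmented_chebyshev(
    Y: torch.Tensor, weights: torch.Tensor, rho: float = 0.05
) -> torch.Tensor:
    """Scalarize objective vectors Y of shape (..., M) with convex weights."""
    weighted = weights * Y
    return weighted.min(dim=-1).values + rho * weighted.sum(dim=-1)

# qParEGO-style usage: draw a fresh random weight vector per candidate,
# then maximize (q)EI on the scalarized objective.
weights = torch.distributions.Dirichlet(torch.ones(2)).sample()
Y = torch.tensor([[0.8, 0.2], [0.5, 0.5]])
print(augmented_chebyshev(Y, weights))
```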
Results
  • The authors' empirical evaluation shows that qEHVI outperforms state-of-the-art multi-objective BO algorithms using a fraction of their wall time.
  • The authors demonstrate that the method achieves performance superior to that of state-of-the-art MO BO approaches.
Conclusion
  • The authors present a practical and efficient algorithm for parallel, constrained MO BO.
  • Extending qEHVI to noisy observations would be nontrivial in the parallel case.
  • Such an integration would be equivalent to a noiseless qEHVI computation with batch size |P| + q, which would be prohibitively expensive since the inclusion-exclusion computation scales exponentially with the batch size (see the sketch after this list).
  • Additional wall-time performance improvements can be gained through the use of more efficient partitioning algorithms (e.g., [16]).
  • The authors hope this work encourages researchers to pursue further improvements by applying modern computational paradigms and tooling to MO BO, and to BO more generally.
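To make the exponential scaling concrete: the qEHVI estimator computes the joint hypervolume improvement of q points via the inclusion-exclusion principle over all non-empty candidate subsets, so the term count is 2^q − 1. A hypothetical back-of-the-envelope sketch:

```python
# Hypothetical illustration of why qEHVI computation scales exponentially in
# the batch size: inclusion-exclusion enumerates all non-empty candidate subsets.
def num_inclusion_exclusion_terms(q: int) -> int:
    return 2**q - 1  # non-empty subsets of q candidates

print(num_inclusion_exclusion_terms(4))       # q = 4 -> 15 terms
# Integrating out noise over a Pareto set of size |P| = 20 with q = 2 would
# behave like a noiseless batch of size 22:
print(num_inclusion_exclusion_terms(20 + 2))  # 4,194,303 terms
```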
Summary
  • Objectives:

    In the constrained optimization setting, the authors aim to identify the feasible Pareto set: Pfeas = {f(x) s.t. c(x) ≥ 0, ∄ x′ : c(x′) ≥ 0, f(x′) ≻ f(x)}, i.e., the objective vectors of feasible points that are not dominated by any other feasible point (a sketch of this filtering follows below).
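A minimal sketch of this definition as a filtering operation, assuming maximization, feasibility when all constraint values are nonnegative, and a hypothetical helper name:

```python
# Hypothetical helper illustrating the feasible Pareto set definition above.
# Y holds objective values (maximization); C holds constraint values
# (a point is feasible iff all entries of its row are >= 0).
import numpy as np

def feasible_pareto_mask(Y: np.ndarray, C: np.ndarray) -> np.ndarray:
    """Boolean mask of points that are feasible and not dominated by any
    other feasible point."""
    feas = (C >= 0).all(axis=-1)
    mask = feas.copy()
    for i in np.flatnonzero(feas):
        others = feas.copy()
        others[i] = False
        # x' dominates x if f(x') >= f(x) everywhere and > somewhere.
        dominated = (
            (Y[others] >= Y[i]).all(axis=-1) & (Y[others] > Y[i]).any(axis=-1)
        ).any()
        if dominated:
            mask[i] = False
    return mask

Y = np.array([[1.0, 2.0], [2.0, 1.0], [1.5, 1.5], [0.5, 0.5]])
C = np.array([[1.0], [1.0], [-1.0], [1.0]])  # third point is infeasible
print(feasible_pareto_mask(Y, C))  # [ True  True False False]
```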
Tables
  • Table1: Acquisition Optimization wall time in seconds on a CPU (2x Intel Xeon E5-2680 v4 @ 2.40GHz) and a GPU (Tesla V100-SXM2-16GB). We report the mean and 2 standard errors across 20 trials. NA indicates that the algorithm does not support constraints
  • Table2: Reference points for all benchmark problems, assuming minimization. In our benchmarks, we equivalently maximize the negated objectives and multiply the reference points by -1 (see the sketch after this list)
  • Table3: Acquisition Optimization wall time in seconds on a CPU (2x Intel Xeon E5-2680 v4 @ 2.40GHz) and on a GPU (Tesla V100-SXM2-16GB). The mean and two standard errors are reported. NA indicates that the algorithm does not support constraints
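A short sketch of the sign-flip convention described in the Table 2 caption, with hypothetical values; maximizing the negated objectives against the negated reference point leaves the dominated hypervolume unchanged:

```python
# Converting a minimization benchmark to a maximization convention, per
# Table 2: negate the objectives and the reference point (hypothetical values).
import torch

ref_point_min = torch.tensor([11.0, 11.0])      # reference point, minimization
Y_min = torch.tensor([[1.0, 9.0], [3.0, 4.0]])  # objective values to minimize

Y_max = -Y_min                   # maximize the negated objectives instead
ref_point_max = -ref_point_min   # flip the reference point accordingly
```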
Related work
  • Yang et al. [65] is the only previous work to consider exact gradients of EHVI, but the authors only derive an analytical gradient for the unconstrained M = 2, q = 1 setting. All other works either do not optimize EHVI (e.g., they use it for pre-screening candidates [17]), optimize it with gradient-free methods [64], or optimize it using approximate gradients [58]. In contrast, we use exact gradients and demonstrate that optimizing EHVI with gradients is far more efficient.

    There are many alternatives to EHVI for MO BO. For example, ParEGO [39] randomly scalarizes the objectives and uses Expected Improvement [37], and SMS-EGO [50] uses HV in a UCB-based acquisition function and is more scalable than EHVI [51]. Both methods have only been considered for the q = 1, unconstrained setting. Predictive entropy search for MO BO (PESMO) [32] has been shown to be another competitive alternative and has been extended to handle constraints [25] and parallel evaluations [26]. MO max-value entropy search (MO-MES) has been shown to achieve superior optimization performance and faster wall times than PESMO, but is limited to q = 1.
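Since several of these methods rank candidates by the hypervolume (HV) indicator, a minimal sketch of computing HV with BoTorch's utility [5] may help; the module path and signature are assumed from a recent release:

```python
# Sketch: hypervolume dominated by a Pareto front above a reference point,
# using BoTorch's utility (path/signature assumed from a recent release).
import torch
from botorch.utils.multi_objective.hypervolume import Hypervolume

ref_point = torch.tensor([0.0, 0.0])
pareto_Y = torch.tensor([[1.0, 3.0], [2.0, 2.0], [3.0, 1.0]])  # maximization

hv = Hypervolume(ref_point=ref_point)
print(hv.compute(pareto_Y))  # union of dominated boxes: 6.0 here
```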
References
  • M. Abdolshah, A. Shilton, S. Rana, S. Gupta, and S. Venkatesh. Expected hypervolume improvement with constraints. In 2018 24th International Conference on Pattern Recognition (ICPR), pages 3238–3243, 2018.
  • Arash Asadpour, Hamid Nazerzadeh, and Amin Saberi. Stochastic submodular maximization. In Christos Papadimitriou and Shuzhong Zhang, editors, Internet and Network Economics. Springer Berlin Heidelberg, 2008.
  • R. Astudillo and P. Frazier. Bayesian optimization of composite functions. In Proceedings of the 36th International Conference on Machine Learning, 2019.
  • Anne Auger, Johannes Bader, Dimo Brockhoff, and Eckart Zitzler. Theory of the hypervolume indicator: Optimal mu-distributions and the choice of the reference point. In Proceedings of the Tenth ACM SIGEVO Workshop on Foundations of Genetic Algorithms, FOGA ’09, page 87–102, New York, NY, USA, 2009. Association for Computing Machinery.
  • Maximilian Balandat, Brian Karrer, Daniel R. Jiang, Samuel Daulton, Benjamin Letham, Andrew Gordon Wilson, and Eytan Bakshy. BoTorch: Programmable Bayesian optimization in PyTorch. arXiv preprint arXiv:1910.06403, 2019.
  • Syrine Belakaria, Aryan Deshwal, and Janardhan Rao Doppa. Max-value entropy search for multi-objective bayesian optimization. In Advances in Neural Information Processing Systems 32, 2019.
  • Eric Bradford, Artur Schweidtmann, and Alexei Lapkin. Efficient multiobjective optimization employing gaussian processes, spectral sampling and a genetic algorithm. Journal of Global Optimization, 71, 02 2018. doi: 10.1007/s10898-018-0609-2.
  • Russel E Caflisch. Monte carlo and quasi-monte carlo methods. Acta numerica, 7:1–49, 1998.
  • Anirban Chaudhuri, Raphael Haftka, Peter Ifju, Kelvin Chang, Christopher Tyler, and Tony Schmitz. Experimental flapping wing optimization and uncertainty quantification using limited samples. Structural and Multidisciplinary Optimization, 51, 11 2014. doi: 10.1007/s00158-014-1184-x.
  • I. Couckuyt, D. Deschrijver, and T. Dhaene. Towards efficient multiobjective optimization: Multiobjective statistical criterions. In 2012 IEEE Congress on Evolutionary Computation, pages 1–8, 2012.
  • Ivo Couckuyt, Dirk Deschrijver, and Tom Dhaene. Fast calculation of multiobjective probability of improvement and expected improvement criteria for pareto optimization. J. of Global Optimization, 60(3): 575–594, November 2014.
  • Daniel A. da Silva. Proprietades geraes. J. de l’Ecole Polytechnique, cah.
  • K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan. A fast and elitist multiobjective genetic algorithm: Nsga-ii. IEEE Transactions on Evolutionary Computation, 6(2):182–197, 2002.
  • Kalyanmoy Deb, Lothar Thiele, Marco Laumanns, and Eckart Zitzler. Scalable multi-objective optimization test problems. In Proceedings of the 2002 Congress on Evolutionary Computation (CEC), volume 1, pages 825–830, 2002. doi: 10.1109/CEC.2002.1007032.
  • Kalyanmoy Deb. Constrained Multi-objective Evolutionary Algorithm, pages 85–118. Springer International Publishing, Cham, 2019.
  • Kerstin Dächert, Kathrin Klamroth, Renaud Lacour, and Daniel Vanderpooten. Efficient computation of the search region in multi-objective optimization. European Journal of Operational Research, 260(3):841 – 855, 2017.
  • M. T. M. Emmerich, K. C. Giannakoglou, and B. Naujoks. Single- and multiobjective evolutionary optimization assisted by gaussian random field metamodels. IEEE Transactions on Evolutionary Computation, 10(4):421–439, 2006.
  • M. T. M. Emmerich, A. H. Deutz, and J. W. Klinkenberg. Hypervolume-based expected improvement: Monotonicity properties and exact computation. In 2011 IEEE Congress of Evolutionary Computation (CEC), pages 2147–2154, 2011.
  • Michael Emmerich, Kaifeng Yang, André Deutz, Hao Wang, and Carlos M. Fonseca. A Multicriteria Generalization of Bayesian Global Optimization, pages 229–242. Springer International Publishing, 2016.
  • Michael T. M. Emmerich and Carlos M. Fonseca. Computing hypervolume contributions in low dimensions: Asymptotically optimal algorithm and complexity results. In Ricardo H. C. Takahashi, Kalyanmoy Deb, Elizabeth F. Wanner, and Salvatore Greco, editors, Evolutionary Multi-Criterion Optimization, pages 121–135, Berlin, Heidelberg, 2011. Springer Berlin Heidelberg.
  • Paul Feliot, Julien Bect, and Emmanuel Vazquez. A bayesian approach to constrained single- and multiobjective optimization. Journal of Global Optimization, 67(1-2):97–133, Apr 2016. ISSN 1573-2916. doi: 10.1007/s10898-016-0427-3. URL http://dx.doi.org/10.1007/s10898-016-0427-3.
  • M. L. Fisher, G. L. Nemhauser, and L. A. Wolsey. An analysis of approximations for maximizing submodular set functions—II, pages 73–87. Springer Berlin Heidelberg, Berlin, Heidelberg, 1978.
  • Tobias Friedrich and Frank Neumann. Maximizing submodular functions under matroid constraints by multi-objective evolutionary algorithms. In Thomas Bartz-Beielstein, Jürgen Branke, Bogdan Filipic, and Jim Smith, editors, Parallel Problem Solving from Nature – PPSN XIII, pages 922–931, Cham, 2014. Springer International Publishing. ISBN 978-3-319-10762-2.
  • Jacob Gardner, Matt Kusner, Zhixiang, Kilian Weinberger, and John Cunningham. Bayesian optimization with inequality constraints. In Proceedings of the 31st International Conference on Machine Learning, volume 32 of Proceedings of Machine Learning Research, pages 937–945, Beijing, China, 22–24 Jun 2014. PMLR.
  • Eduardo C. Garrido-Merchán and Daniel Hernández-Lobato. Predictive entropy search for multi-objective bayesian optimization with constraints. Neurocomputing, 361:50 – 68, 2019.
  • Eduardo C. Garrido-Merchán and Daniel Hernández-Lobato. Parallel predictive entropy search for multiobjective bayesian optimization with constraints, 2020.
  • David Gaudrie, Rodolphe Le Riche, Victor Picheny, Benoît Enaux, and Vincent Herbert. Targeting solutions in bayesian multi-objective optimization: sequential and batch versions. Annals of Mathematics and Artificial Intelligence, 88(1-3):187–212, Aug 2019. ISSN 1573-7470. doi: 10.1007/s10472-019-09644-8. URL http://dx.doi.org/10.1007/s10472-019-09644-8.
  • Michael A. Gelbart, Jasper Snoek, and Ryan P. Adams. Bayesian optimization with unknown constraints. In Proceedings of the 30th Conference on Uncertainty in Artificial Intelligence, UAI, 2014.
  • David Ginsbourger, Rodolphe Le Riche, and Laurent Carraro. Kriging Is Well-Suited to Parallelize Optimization, pages 131–162. Springer Berlin Heidelberg, Berlin, Heidelberg, 2010.
  • P. Glasserman. Performance continuity and differentiability in monte carlo optimization. In 1988 Winter Simulation Conference Proceedings, pages 518–524, 1988.
  • Nikolaus Hansen. The CMA evolution strategy: A comparing review. In Towards a New Evolutionary Computation, volume 192, pages 75–102. Springer, 2007. doi: 10.1007/3-540-32494-1_4.
  • Daniel Hernández-Lobato, José Miguel Hernández-Lobato, Amar Shah, and Ryan P. Adams. Predictive entropy search for multi-objective bayesian optimization, 2015.
  • Iris Hupkens, Andre Deutz, Kaifeng Yang, and Michael Emmerich. Faster exact algorithms for computing expected hypervolume improvement. In Antonio Gaspar-Cunha, Carlos Henggeler Antunes, and Carlos Coello Coello, editors, Evolutionary Multi-Criterion Optimization, pages 65–79. Springer International Publishing, 2015.
  • Hisao Ishibuchi, Naoya Akedo, and Yusuke Nojima. A many-objective test problem for visually examining diversity maintenance behavior in a decision space. In Proceedings of the 13th Annual Conference on Genetic and Evolutionary Computation, GECCO ’11, page 649–656, New York, NY, USA, 2011. Association for Computing Machinery. ISBN 9781450305570. doi: 10.1145/2001576.2001666. URL https://doi.org/10.1145/2001576.2001666.
  • Hisao Ishibuchi, Ryo Imada, Yu Setoguchi, and Yusuke Nojima. How to specify a reference point in hypervolume calculation for fair performance comparison. Evol. Comput., 26(3):411–440, September 2018.
  • Donald Jones, C. Perttunen, and B. Stuckman. Lipschitzian optimisation without the Lipschitz constant. Journal of Optimization Theory and Applications, 79:157–181, 1993. doi: 10.1007/BF00941892.
  • Donald R. Jones, Matthias Schonlau, and William J. Welch. Efficient global optimization of expensive black-box functions. Journal of Global Optimization, 13:455–492, 1998.
  • Diederik P. Kingma and Max Welling. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114, 2013.
  • J. Knowles. Parego: a hybrid algorithm with on-line landscape approximation for expensive multiobjective optimization problems. IEEE Transactions on Evolutionary Computation, 10(1):50–66, 2006.
  • Renaud Lacour, Kathrin Klamroth, and Carlos M. Fonseca. A box decomposition algorithm to compute the hypervolume indicator. Computers & Operations Research, 79:347 – 360, 2017.
  • Benjamin Letham, Brian Karrer, Guilherme Ottoni, and Eytan Bakshy. Constrained bayesian optimization with noisy experiments. Bayesian Analysis, 14(2):495–519, 06 2019. doi: 10.1214/18-BA1110.
  • Xingtao Liao, Qing Li, Xujing Yang, Weigang Zhang, and Wei Li. Multiobjective optimization for crash safety design of vehicles using stepwise regression model. Structural and Multidisciplinary Optimization, 35:561–569, 06 2008. doi: 10.1007/s00158-007-0163-x.
  • Edgar Manoatl Lopez, Luis Miguel Antonio, and Carlos A. Coello Coello. A gpu-based algorithm for a faster hypervolume contribution computation. In António Gaspar-Cunha, Carlos Henggeler Antunes, and Carlos Coello Coello, editors, Evolutionary Multi-Criterion Optimization, pages 80–94. Springer International Publishing, 2015.
  • Hongzi Mao, Ravi Netravali, and Mohammad Alizadeh. Neural adaptive video streaming with pensieve. In Proceedings of the Conference of the ACM Special Interest Group on Data Communication, SIGCOMM ’17, page 197–210, New York, NY, USA, 2017. Association for Computing Machinery. ISBN 9781450346535. doi: 10.1145/3098822.3098843. URL https://doi.org/10.1145/3098822.3098843.
  • Hongzi Mao, Shannon Chen, Drew Dimmery, Shaun Singh, Drew Blaisdell, Yuandong Tian, Mohammad Alizadeh, and Eytan Bakshy. Real-world video adaptation with reinforcement learning. 2019.
  • Hongzi Mao, Parimarjan Negi, Akshay Narayan, Hanrui Wang, Jiacheng Yang, Haonan Wang, Ryan Marcus, Ravichandra Addanki, Mehrdad Khani Shirkoohi, Songtao He, Vikram Nathan, Frank Cangialosi, Shaileshh Bojja Venkatakrishnan, Wei-Hung Weng, Shu-Wen Han, Tim Kraska, and Mohammad Alizadeh. Park: An open platform for learning-augmented computer systems. In NeurIPS, 2019.
  • Sébastien Marmin, Clément Chevalier, and David Ginsbourger. Differentiating the multipoint expected improvement for optimal batch design. In Panos Pardalos, Mario Pavone, Giovanni Maria Farinella, and Vincenzo Cutello, editors, Machine Learning, Optimization, and Big Data, pages 37–48, Cham, 2015. Springer International Publishing.
  • Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in PyTorch. 2017.
  • Victor Picheny. Multiobjective optimization using gaussian process emulators via stepwise uncertainty reduction. Statistics and Computing, 25, 10 2013. doi: 10.1007/s11222-014-9477-x.
  • Wolfgang Ponweiser, Tobias Wagner, Dirk Biermann, and Markus Vincze. Multiobjective optimization on a limited budget of evaluations using model-assisted s-metric selection. In Günter Rudolph, Thomas Jansen, Nicola Beume, Simon Lucas, and Carlo Poloni, editors, Parallel Problem Solving from Nature – PPSN X, pages 784–794, Berlin, Heidelberg, 2008. Springer Berlin Heidelberg.
  • Alma A. M. Rahat, Richard M. Everson, and Jonathan E. Fieldsend. Alternative infill strategies for expensive multi-objective optimisation. In Proceedings of the Genetic and Evolutionary Computation Conference, GECCO ’17, page 873–880, New York, NY, USA, 2017. Association for Computing Machinery. ISBN 9781450349208.
  • Carl Edward Rasmussen. Gaussian Processes in Machine Learning, pages 63–71. Springer Berlin Heidelberg, Berlin, Heidelberg, 2004.
  • Jerry Segercrantz. Inclusion-exclusion and characteristic functions. Mathematics Magazine, 71(3):216–218, 1998. ISSN 0025570X, 19300980. URL http://www.jstor.org/stable/2691209.
  • B. Shahriari, K. Swersky, Z. Wang, R. P. Adams, and N. de Freitas. Taking the human out of the loop: A review of bayesian optimization. Proceedings of the IEEE, 104(1):148–175, 2016.
  • Niranjan Srinivas, Andreas Krause, Sham Kakade, and Matthias Seeger. Gaussian process optimization in the bandit setting: No regret and experimental design. In Proceedings of the 27th International Conference on International Conference on Machine Learning, ICML’10, page 1015–1022, Madison, WI, USA, 2010. Omnipress. ISBN 9781605589077.
  • J. Sylvester. Note sur le théorème de Legendre. Comptes Rendus Acad. Sci., 96:463–465, 1883.
  • Ryoji Tanabe and Hisao Ishibuchi. An easy-to-use real-world multi-objective optimization problem suite. Applied Soft Computing, 89:106078, 2020. ISSN 1568-4946. doi: https://doi.org/10.1016/j.asoc.2020.106078.
  • Takashi Wada and Hideitsu Hino. Bayesian optimization for multi-objective optimization and multi-point search, 2019.
  • Jialei Wang, Scott C. Clark, Eric Liu, and Peter I. Frazier. Parallel bayesian global optimization of expensive functions, 2016.
  • J. T. Wilson, R. Moriconi, F. Hutter, and M. P. Deisenroth. The reparameterization trick for acquisition functions. ArXiv e-prints, December 2017.
  • James Wilson, Frank Hutter, and Marc Deisenroth. Maximizing acquisition functions for bayesian optimization. In Advances in Neural Information Processing Systems 31, pages 9905–9916. 2018.
  • Jian Wu and Peter I. Frazier. The parallel knowledge gradient method for batch bayesian optimization. In Proceedings of the 30th International Conference on Neural Information Processing Systems, NIPS’16, page 3134–3142, Red Hook, NY, USA, 2016. Curran Associates Inc. ISBN 9781510838819.
  • Kaifeng Yang, Michael Emmerich, André Deutz, and Carlos M. Fonseca. Computing 3-d expected hypervolume improvement and related integrals in asymptotically optimal time. In 9th International Conference on Evolutionary Multi-Criterion Optimization - Volume 10173, EMO 2017, page 685–700, Berlin, Heidelberg, 2017. Springer-Verlag.
  • Kaifeng Yang, Michael Emmerich, André H. Deutz, and Thomas Bäck. Efficient computation of expected hypervolume improvement using box decomposition algorithms. CoRR, abs/1904.12672, 2019.
  • Kaifeng Yang, Michael Emmerich, André Deutz, and Thomas Bäck. Multi-objective bayesian global optimization using expected hypervolume improvement gradient. Swarm and Evolutionary Computation, 44:945 – 956, 2019. ISSN 2210-6502. doi: https://doi.org/10.1016/j.swevo.2018.10.007. URL http://www.sciencedirect.com/science/article/pii/S2210650217307861.
  • Kaifeng Yang, Pramudita Satria Palar, Michael Emmerich, Koji Shimoyama, and Thomas Bäck. A multi-point mechanism of expected hypervolume improvement for parallel multi-objective bayesian global optimization. In Proceedings of the Genetic and Evolutionary Computation Conference, GECCO ’19, page 656–663, New York, NY, USA, 2019. Association for Computing Machinery. ISBN 9781450361118. doi: 10.1145/3321707.3321784. URL https://doi.org/10.1145/3321707.3321784.
  • R. J. Yang, N. Wang, C. H. Tho, J. P. Bobineau, and B. P. Wang. Metamodeling Development for Vehicle Frontal Impact Simulation. Journal of Mechanical Design, 127(5):1014–1020, 01 2005.
  • E. Zitzler, L. Thiele, M. Laumanns, C. M. Fonseca, and V. G. da Fonseca. Performance assessment of multiobjective optimizers: an analysis and review. IEEE Transactions on Evolutionary Computation, 7(2): 117–132, 2003.