Large-scale Benchmarking of Metaphor-based Optimization Heuristics
CoRR (2024)
Abstract
The number of proposed iterative optimization heuristics is growing steadily,
and with this growth, there have been many points of discussion within the
wider community. One criticism frequently raised against new
algorithms is that they focus on the metaphor used to present the method
rather than on its potential algorithmic contributions. Several studies into
popular metaphor-based algorithms have highlighted these problems, even
showcasing algorithms that are functionally equivalent to older existing
methods. Unfortunately, this detailed approach is not scalable to the whole set
of metaphor-based algorithms. Because of this, we investigate ways in which
benchmarking can shed light on these algorithms. To this end, we run a set of
294 algorithm implementations on the BBOB function suite. We investigate how
the choice of the budget, the performance measure, or other aspects of
experimental design impact the comparison of these algorithms. Our results
emphasize why benchmarking is a key step in expanding our understanding of the
algorithm space, and what challenges still need to be overcome to fully gauge
the potential improvements to the state-of-the-art hiding behind the metaphors.
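The abstract contrasts two experimental-design choices that affect such comparisons: the evaluation budget and the performance measure (e.g., fixed-budget vs. fixed-target views of the same run). A minimal sketch of that distinction, using a toy random search on a sphere function rather than the BBOB suite or any of the 294 benchmarked implementations (all names and parameters here are illustrative assumptions, not from the paper):

```python
import random

def sphere(x):
    # Toy objective standing in for a benchmark function.
    return sum(xi * xi for xi in x)

def random_search(f, dim, budget, seed):
    # Returns the best-so-far value after each evaluation.
    rng = random.Random(seed)
    best = float("inf")
    history = []
    for _ in range(budget):
        x = [rng.uniform(-5, 5) for _ in range(dim)]
        best = min(best, f(x))
        history.append(best)
    return history

def fixed_budget(history, budget):
    # Fixed-budget view: quality reached after `budget` evaluations.
    return history[budget - 1]

def fixed_target(history, target):
    # Fixed-target view: evaluations needed to reach `target` (None if never).
    for i, value in enumerate(history, start=1):
        if value <= target:
            return i
    return None

hist = random_search(sphere, dim=5, budget=1000, seed=42)
print(fixed_budget(hist, 100), fixed_target(hist, 1.0))
```

Two algorithms can rank differently under the two views, which is why the paper examines how the choice of budget and measure changes the comparison.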