Deep Optimisation - Multi-scale Evolution by Inducing and Searching in Deep Representations.

EvoApplications (2021)

Abstract
The ability of evolutionary processes to innovate and scale up over long periods of time, observed in nature, remains a central mystery in evolutionary biology, and a challenge for algorithm designers to emulate and explain in evolutionary computation (EC). The Major Transitions in Evolution is a compelling theory that explains evolvability through a multi-scale process whereby individuality (and hence selection and variation) is continually revised by the formation of associations between formerly independent entities, a process still not fully explored in EC. Deep Optimisation (DO) is a new type of model-building optimisation algorithm (MBOA) that exploits deep learning methods to enable multi-scale optimisation. DO uses an autoencoder model to induce a multi-level representation of solutions, capturing the relationships between the lower-level units that contribute to the quality of a solution. Variation and selection are then performed within the induced representations, causing model-informed changes to multiple solution variables simultaneously. Here, we first show that DO performs impressively compared with other leading MBOAs (and other rival methods) on the multiple knapsack problem, a standard combinatorial optimisation problem of general interest. Going deeper, we then carry out a detailed investigation of the differences between DO and other MBOAs, identifying key problem characteristics for which other MBOAs suffer exponential running times while DO does not. This study serves to concretise our understanding of the Major Transitions theory and of why it leads to evolvability, and provides strong motivation for further investigation of deep learning methods in optimisation.
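To make the cycle described in the abstract concrete, the following is a minimal sketch, not the authors' implementation, of a DO-style loop: local search at the lowest level, an autoencoder trained on locally optimal solutions to capture inter-variable structure, and model-informed variation applied in the induced (latent) representation. The network shape, the one-max fitness function, and all hyperparameters here are placeholder assumptions for illustration only.

```python
# Illustrative Deep Optimisation-style loop (assumed details, not the paper's code).
import torch
import torch.nn as nn

torch.manual_seed(0)
N_VARS, N_HIDDEN, POP, EPOCHS = 64, 16, 100, 200

class AE(nn.Module):
    """Shallow autoencoder inducing a higher-level representation of solutions."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(N_VARS, N_HIDDEN), nn.Tanh())
        self.dec = nn.Sequential(nn.Linear(N_HIDDEN, N_VARS), nn.Sigmoid())
    def forward(self, x):
        return self.dec(self.enc(x))

def fitness(x):
    # Placeholder objective (one-max); a real MKP evaluation would go here.
    return x.sum(dim=-1)

def local_search(x):
    # Greedy bit-flip hill climbing: variation at the lowest-level units.
    for i in torch.randperm(x.shape[-1]):
        y = x.clone()
        y[..., i] = 1 - y[..., i]
        if fitness(y) > fitness(x):
            x = y
    return x

# 1. Collect a population of locally optimal solutions.
pop = torch.stack([local_search(torch.randint(0, 2, (N_VARS,)).float())
                   for _ in range(POP)])

# 2. Train the autoencoder to model the structure among solution variables.
model = AE()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(EPOCHS):
    opt.zero_grad()
    loss = nn.functional.binary_cross_entropy(model(pop), pop)
    loss.backward()
    opt.step()

# 3. Model-informed variation: perturb in the latent space, decode, and select.
#    A latent move changes multiple solution variables simultaneously.
with torch.no_grad():
    for i in range(POP):
        z = model.enc(pop[i]) + 0.5 * torch.randn(N_HIDDEN)
        child = (model.dec(z) > 0.5).float()
        if fitness(child) >= fitness(pop[i]):
            pop[i] = child
```

In the full algorithm this train-then-search cycle would repeat, deepening the representation over successive levels; the sketch shows a single pass of the multi-scale idea.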
Keywords
Model-Building Optimisation Algorithms, Deep autoencoder, Multi-scale search, Problem structure