Improving the scalability of distributed neuroevolution using modular congruence class generated innovation numbers

Genetic and Evolutionary Computation Conference (2021)

Abstract

The asynchronous master-worker model is a classic method for distributing evolutionary algorithms, as it decouples population size from the number of available processors while remaining naturally load balanced. Although easy to implement, it suffers from an unavoidable choke point: the master process, which must process all results and generate tasks for workers. This work investigates a method for improving the performance of distributed neuroevolution algorithms, which commonly use such a model, by offloading costly crossover and mutation operations to the worker processes. To accomplish this, a novel modular congruence class based strategy for generating unique innovation numbers was developed, which requires no additional communication overhead. Experimental results designed to stress test the master process were generated using the Evolutionary eXploration of Augmenting Memory Models (EXAMM) neuroevolution algorithm, after preliminary results revealed a bottleneck preventing scalability past 432 cores in a high performance computing environment. The results show a statistically significant improvement in throughput (genome evaluations per second) and scalability past 864 cores using this offloading method. Furthermore, this methodology is generic and could be applied to any neuroevolution algorithm that utilizes NEAT-inspired innovation numbers.
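The core idea of the congruence-class strategy can be sketched briefly: if each of n workers draws innovation numbers only from its own residue class modulo n, every worker generates globally unique numbers with no coordination. The following is a minimal illustrative sketch under that assumption; the class and method names are hypothetical and do not come from the EXAMM codebase.

```python
class InnovationCounter:
    """Per-worker innovation number generator.

    Worker with rank r out of n_workers draws from the congruence
    class {r, r + n, r + 2n, ...} (all integers congruent to r mod n),
    so numbers never collide across workers and no extra master-worker
    communication is needed to keep them unique.
    """

    def __init__(self, worker_rank: int, n_workers: int):
        self.next_value = worker_rank  # first member of this worker's class
        self.stride = n_workers        # step to the next member of the class

    def next(self) -> int:
        value = self.next_value
        self.next_value += self.stride
        return value
```

For example, with two workers, rank 0 produces 0, 2, 4, ... and rank 1 produces 1, 3, 5, ..., so any mutation or crossover performed locally on a worker can still assign innovation numbers that are unique across the whole population.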