DARLEI: Deep Accelerated Reinforcement Learning with Evolutionary Intelligence
CoRR (2023)
Abstract
We present DARLEI, a framework that combines evolutionary algorithms with
parallelized reinforcement learning for efficiently training and evolving
populations of UNIMAL agents. Our approach utilizes Proximal Policy
Optimization (PPO) for individual agent learning and pairs it with a tournament
selection-based generational learning mechanism to foster morphological
evolution. By building on Nvidia's Isaac Gym, DARLEI leverages GPU accelerated
simulation to achieve over 20x speedup using just a single workstation,
compared to previous work which required large distributed CPU clusters. We
systematically characterize DARLEI's performance under various conditions,
revealing factors impacting the diversity of evolved morphologies. For example, by
enabling inter-agent collisions within the simulator, we can simulate
multi-agent interactions between agents of the same morphology and observe how
these interactions influence individual agent capabilities and long-term
evolutionary adaptation. While current results demonstrate limited diversity across
generations, we hope to extend DARLEI in future work to include interactions
between diverse morphologies in richer environments, and create a platform that
allows for coevolving populations and investigating emergent behaviours in
them. Our source code is publicly available at
https://saeejithnair.github.io/darlei.
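The generational mechanism named in the abstract, tournament selection, can be sketched as follows. This is a minimal illustration of the standard technique, not the authors' implementation: the function names, the mutation hook, and the use of a PPO-trained reward as the fitness signal are assumptions for the sketch.

```python
import random

def tournament_select(population, fitness, k=3, rng=random):
    """Pick one parent by sampling k individuals and keeping the fittest.

    `population` is a list of agents (e.g. UNIMAL morphologies) and
    `fitness` maps each agent to a scalar score (in DARLEI's setting,
    the reward attained after PPO training). The tournament size k
    trades off selection pressure against population diversity.
    """
    contenders = rng.sample(population, k)
    return max(contenders, key=fitness)

def next_generation(population, fitness, mutate, k=3, rng=random):
    """Build a new generation by repeated tournaments plus mutation.

    `mutate` is a hypothetical hook that perturbs a parent's morphology;
    the real system would retrain each child with PPO before the next
    round of selection.
    """
    return [mutate(tournament_select(population, fitness, k=k, rng=rng))
            for _ in range(len(population))]
```

A smaller k keeps weaker morphologies in play and preserves diversity, while k close to the population size collapses selection onto the current best individual, which is one lever behind the diversity effects the abstract characterizes.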