SRL: Scaling Distributed Reinforcement Learning to Over Ten Thousand Cores
arXiv (2023)
Abstract
The ever-growing complexity of reinforcement learning (RL) tasks demands a
distributed system to efficiently generate and process a massive amount of
data. However, existing open-source libraries suffer from various limitations,
which impede their practical use in challenging scenarios where large-scale
training is necessary. In this paper, we present a novel abstraction on the
dataflows of RL training, which unifies diverse RL training applications into a
general framework. Following this abstraction, we develop a scalable,
efficient, and extensible distributed RL system called ReaLlyScalableRL (SRL), which
allows efficient and massively parallelized training and easy development of
customized algorithms. Our evaluation shows that SRL outperforms existing
academic libraries, reaching up to 21x higher training throughput in a
distributed setting. On learning performance, beyond performing and scaling
well on common RL benchmarks with different RL algorithms, SRL can reproduce
the same solution in the challenging hide-and-seek environment as reported by
OpenAI with up to 5x speedup in wall-clock time. Notably, SRL is the first in
the academic community to perform RL experiments at a large scale with over 15k
CPU cores. SRL source code is available at:
https://github.com/openpsi-project/srl .
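
The abstract does not spell out the dataflow abstraction itself, but a minimal sketch of the kind of worker/stream decomposition such a system rests on can look like the following. All names here (ActorWorker, TrainerWorker, SampleStream) are illustrative assumptions for this sketch, not SRL's actual API.

```python
# Hypothetical sketch of a worker/stream dataflow for distributed RL.
# ActorWorker, TrainerWorker, and SampleStream are illustrative names,
# not SRL's actual interfaces.
import queue
import random
from dataclasses import dataclass
from typing import List


@dataclass
class Trajectory:
    observations: List[float]
    actions: List[int]
    rewards: List[float]


class SampleStream:
    """Queue that decouples data generation from training."""

    def __init__(self) -> None:
        self._q: "queue.Queue[Trajectory]" = queue.Queue()

    def push(self, traj: Trajectory) -> None:
        self._q.put(traj)

    def pull_batch(self, batch_size: int) -> List[Trajectory]:
        return [self._q.get() for _ in range(batch_size)]


class ActorWorker:
    """Runs environment steps and emits trajectories to the stream."""

    def __init__(self, stream: SampleStream, horizon: int = 8) -> None:
        self.stream = stream
        self.horizon = horizon

    def rollout(self) -> None:
        obs, acts, rews = [], [], []
        for _ in range(self.horizon):
            obs.append(random.random())        # stand-in for an env observation
            acts.append(random.randint(0, 1))  # stand-in for a policy action
            rews.append(random.random())       # stand-in for an env reward
        self.stream.push(Trajectory(obs, acts, rews))


class TrainerWorker:
    """Consumes trajectory batches and performs parameter updates."""

    def __init__(self, stream: SampleStream) -> None:
        self.stream = stream

    def train_step(self, batch_size: int) -> float:
        batch = self.stream.pull_batch(batch_size)
        # Stand-in for a gradient update: report the mean return of the batch.
        return sum(sum(t.rewards) for t in batch) / len(batch)


if __name__ == "__main__":
    stream = SampleStream()
    actors = [ActorWorker(stream) for _ in range(4)]
    trainer = TrainerWorker(stream)
    for actor in actors:
        actor.rollout()
    print("mean return:", trainer.train_step(batch_size=4))
```

The point of separating data-generating workers from training workers behind a stream is that each side can be replicated independently, which is what lets a system of this kind spread trajectory generation over thousands of CPU cores while trainers consume the data elsewhere.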