Enhanced Experience Replay Generation for Efficient Reinforcement Learning

Vincent Huang, Tobias Ley, Martha Vlachou-Konchylaki, Wenfeng Hu

arXiv: Artificial Intelligence (2017)

Abstract
Applying deep reinforcement learning (RL) to real systems suffers from slow data sampling. We propose an enhanced generative adversarial network (EGAN) to initialize an RL agent in order to achieve faster learning. The EGAN utilizes the relation between states and actions to enhance the quality of data samples generated by a GAN. Pre-training the agent with the EGAN shows a steeper learning curve, with about 20% faster learning compared to no pre-training, and an improvement of about 5% compared to pre-training with a plain GAN. For systems with sparse and slow data sampling, the EGAN could be used to speed up the early phases of the training process.
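The abstract describes the core idea only at a high level: a GAN generates synthetic experience-replay transitions, an additional network enforces consistency between generated states and actions, and the resulting samples are used to pre-fill the agent's replay buffer before real interaction begins. The following is a minimal, hypothetical PyTorch sketch of that idea, not the authors' implementation; all network shapes, the `enhancer` consistency term, and the `egan_step` routine are assumptions for illustration.

```python
# Hypothetical sketch of EGAN-style pre-training (not the paper's code):
# a GAN generates synthetic transitions (s, a, r, s'); an "enhancer" network
# learned on real data penalizes generated samples whose actions are
# inconsistent with their states; generated samples pre-fill a replay buffer.
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM, NOISE_DIM = 4, 2, 8          # assumed toy dimensions
SAMPLE_DIM = STATE_DIM + ACTION_DIM + 1 + STATE_DIM  # (s, a, r, s')

generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 64), nn.ReLU(),
    nn.Linear(64, SAMPLE_DIM),
)
discriminator = nn.Sequential(
    nn.Linear(SAMPLE_DIM, 64), nn.ReLU(),
    nn.Linear(64, 1), nn.Sigmoid(),
)
# Enhancer: predicts the action from the state; used to keep generated
# state-action pairs consistent (assumed form of the state-action relation).
enhancer = nn.Sequential(
    nn.Linear(STATE_DIM, 32), nn.ReLU(),
    nn.Linear(32, ACTION_DIM),
)

bce, mse = nn.BCELoss(), nn.MSELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
e_opt = torch.optim.Adam(enhancer.parameters(), lr=1e-3)

def egan_step(real_batch):
    """One EGAN update on a batch of real transitions, shape (B, SAMPLE_DIM)."""
    batch = real_batch.size(0)
    noise = torch.randn(batch, NOISE_DIM)

    # Discriminator: distinguish real transitions from generated ones.
    fake = generator(noise)
    d_loss = bce(discriminator(real_batch), torch.ones(batch, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(batch, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Enhancer: learn the state -> action relation from real data.
    real_s = real_batch[:, :STATE_DIM]
    real_a = real_batch[:, STATE_DIM:STATE_DIM + ACTION_DIM]
    e_loss = mse(enhancer(real_s), real_a)
    e_opt.zero_grad(); e_loss.backward(); e_opt.step()

    # Generator: fool the discriminator while keeping generated (s, a) consistent.
    fake = generator(noise)
    fake_s = fake[:, :STATE_DIM]
    fake_a = fake[:, STATE_DIM:STATE_DIM + ACTION_DIM]
    g_loss = bce(discriminator(fake), torch.ones(batch, 1)) + \
             mse(fake_a, enhancer(fake_s).detach())
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()

# After EGAN training, synthetic transitions can pre-fill the agent's replay
# buffer so early RL updates do not wait on slow real-system sampling.
with torch.no_grad():
    synthetic_replay = generator(torch.randn(1024, NOISE_DIM))
```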