Modern Value Based Reinforcement Learning: A Chronological Review

IEEE Access (2022)

Abstract
Research into value-based Reinforcement Learning returned to the mainstream in 2015, following demonstrations of super-human performance on Atari 2600 games. Since then, this area, and the field of Artificial Intelligence more broadly, has attracted significant media attention and hype. This review focuses exclusively on the progression of value-based Reinforcement Learning over the subsequent five years. We aim to distill the incremental improvements to stability and performance made in this period, highlighting how little the base algorithm itself has changed. The one exception is the Recurrent Experience Replay in Distributed Reinforcement Learning algorithm, which represents a fundamental shift: a marked increase in agent performance achieved through an advanced memory representation. We conclude by suggesting a new focus area for value-based Reinforcement Learning research.
Keywords
Artificial intelligence, reinforcement learning, Q-learning
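The Q-learning keyword refers to the value-based update at the heart of the algorithms this review surveys. As a minimal illustration (not taken from the paper; the chain environment, hyperparameters, and function names below are illustrative assumptions), here is tabular Q-learning on a tiny deterministic chain MDP:

```python
import random


def q_learning_chain(n_states=5, episodes=500, alpha=0.5, gamma=0.9,
                     epsilon=0.1, seed=0):
    """Tabular Q-learning on a toy chain MDP (illustrative sketch).

    States 0..n_states-1; actions: 0 = left, 1 = right. Moving right from
    state n_states-2 reaches the terminal state and yields reward 1.0;
    every other transition yields reward 0.
    """
    rng = random.Random(seed)
    goal = n_states - 1
    q = [[0.0, 0.0] for _ in range(n_states)]

    def greedy(qs):
        # Break ties randomly so the untrained agent still explores.
        if qs[0] == qs[1]:
            return rng.choice([0, 1])
        return 0 if qs[0] > qs[1] else 1

    for _ in range(episodes):
        s = 0
        while s != goal:
            # Epsilon-greedy behaviour policy.
            a = rng.randrange(2) if rng.random() < epsilon else greedy(q[s])
            s_next = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s_next == goal else 0.0
            # Off-policy Q-learning target: bootstrap from the greedy
            # next-state value (zero at the terminal state).
            target = r + gamma * max(q[s_next]) * (s_next != goal)
            q[s][a] += alpha * (target - q[s][a])
            s = s_next
    return q


if __name__ == "__main__":
    q = q_learning_chain()
    # Greedy policy per non-terminal state (1 = move right).
    print([0 if q[s][0] > q[s][1] else 1 for s in range(4)])
```

After training, the greedy policy moves right in every non-terminal state, and the learned values decay by a factor of gamma per step away from the goal. The deep variants surveyed in the review replace the table `q` with a neural network and the single-transition update with minibatches drawn from a replay buffer.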