State Representation Learning For Effective Deep Reinforcement Learning

2020 IEEE International Conference on Multimedia and Expo (ICME)

Abstract
Recent years have witnessed the great success of deep reinforcement learning (DRL) on a variety of vision-based games. Although deep neural networks have demonstrated strong power in representation learning, this capacity is under-explored in most DRL works, whose focus is usually on optimization solvers. In fact, we find that state feature learning is the main obstacle to further improvement of DRL algorithms. To address this issue, we propose a new state representation learning scheme with our Adjacent State Consistency Loss (ASC Loss). The loss is defined based on the hypothesis that adjacent states differ less than far-apart ones, since scenes in videos generally evolve smoothly. In this paper, we exploit the ASC loss as an auxiliary to the RL loss in the training phase to boost state feature learning. We evaluate our method on Atari games and MuJoCo continuous control tasks, and the results demonstrate that it is superior to the OpenAI Baselines.
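The abstract does not give the ASC loss equation, so the following PyTorch sketch is only a rough illustration of one formulation consistent with the stated hypothesis: a contrastive margin loss that pulls features of adjacent states together and pushes features of far-apart states beyond a margin. The names `asc_style_loss`, `feat_t`, `feat_adj`, `feat_far`, and the `margin` value are assumptions for illustration, not the authors' notation.

```python
import torch
import torch.nn.functional as F

def asc_style_loss(feat_t, feat_adj, feat_far, margin=1.0):
    """Assumed contrastive form of an adjacent-state-consistency loss.

    feat_t:   features of states s_t              (batch, dim)
    feat_adj: features of adjacent states s_{t+1} (batch, dim)
    feat_far: features of far-apart states        (batch, dim)
    """
    # Pull adjacent states together: penalize their L2 feature distance.
    d_adj = F.pairwise_distance(feat_t, feat_adj)
    # Push far-apart states away: hinge penalty if closer than the margin.
    d_far = F.pairwise_distance(feat_t, feat_far)
    return (d_adj + F.relu(margin - d_far)).mean()

# Toy usage with a stand-in encoder over frames t, t+1, and a distant frame.
phi = torch.nn.Linear(64, 32)
s_t, s_next, s_far = (torch.randn(8, 64) for _ in range(3))
asc = asc_style_loss(phi(s_t), phi(s_next), phi(s_far))
```

Per the abstract, such a term would be combined with the RL objective during training, e.g. `total_loss = rl_loss + lam * asc`, where the auxiliary weight `lam` is likewise an assumed hyperparameter.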
Keywords
Representation learning, reinforcement learning