PPO-CMA: Proximal Policy Optimization with Covariance Matrix Adaptation

arXiv (Cornell University), 2019

Citations 11 | Views 114
Abstract
Proximal Policy Optimization (PPO) is a highly popular model-free reinforcement learning (RL) approach. However, we observe that in a continuous action space, PPO can prematurely shrink the exploration variance, which leads to slow progress and may make the algorithm prone to getting stuck in local optima. Drawing inspiration from CMA-ES, a black-box evolutionary optimization method designed for robustness in similar situations, we propose PPO-CMA, a proximal policy optimization approach that adaptively expands and contracts the exploration variance. With only minor algorithmic changes to PPO, our algorithm considerably improves performance in Roboschool continuous control benchmarks.
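The variance-adaptation idea the abstract borrows from CMA-ES can be illustrated with a toy evolution-strategy loop. The sketch below is an assumption-laden illustration, not the PPO-CMA algorithm itself: it uses a diagonal covariance, a simple elite selection instead of CMA-ES's full rank weighting, and a hypothetical quadratic objective. The key CMA-ES-style trick shown is re-estimating the variance around the *old* mean, so that large steps toward better regions temporarily expand exploration instead of shrinking it.

```python
import numpy as np

def es_step(mean, sigma, objective, pop_size=64, elite_frac=0.5, rng=None):
    """One evolution-strategy step with diagonal variance adaptation.

    Illustrative toy code (not the paper's method): sample a Gaussian
    population, keep the best half, move the mean to the elite mean,
    and fit the new variance around the OLD mean so that progress
    toward the optimum can expand sigma rather than collapse it.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    samples = mean + sigma * rng.standard_normal((pop_size, mean.size))
    scores = np.array([objective(x) for x in samples])
    elite = samples[np.argsort(scores)[: int(pop_size * elite_frac)]]  # lower = better
    new_mean = elite.mean(axis=0)
    # Variance measured around the old mean: displacement counts as spread.
    new_sigma = np.sqrt(((elite - mean) ** 2).mean(axis=0))
    return new_mean, new_sigma

# Hypothetical usage: minimize a shifted quadratic in 2D.
objective = lambda x: float(((x - 3.0) ** 2).sum())
mean, sigma = np.zeros(2), np.ones(2)
rng = np.random.default_rng(42)
for _ in range(40):
    mean, sigma = es_step(mean, sigma, objective, rng=rng)
print(mean)  # should approach [3, 3]
```

Early in the run, the elite samples lie far from the old mean, so `new_sigma` stays large and exploration persists; once the search settles near the optimum, the elite cluster tightly and the variance contracts on its own.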
Keywords
Continuous Control, Reinforcement Learning, Policy Optimization, Policy Gradient, Evolution Strategies, CMA-ES, PPO