Proximal Deterministic Policy Gradient

IROS (2020)

Abstract
This paper introduces two simple techniques to improve off-policy Reinforcement Learning (RL) algorithms. First, we formulate off-policy RL as a stochastic proximal point iteration: the target network plays the role of the optimization variable and the value network computes the proximal operator. Second, we exploit the two value functions commonly employed in state-of-the-art off-policy algorithms to provide an improved action-value estimate through bootstrapping, with only a limited increase in computational resources. Further, we demonstrate significant performance improvements over state-of-the-art algorithms on standard continuous-control RL benchmarks.
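For context on the first technique, here is a minimal sketch of a proximal point iteration as applied to the value update, under assumed notation (w_k for the target-network parameters, L_TD for the temporal-difference loss, lambda > 0 for a step size; these symbols are not taken from the abstract):

w_{k+1} = \operatorname{prox}_{\lambda \mathcal{L}_{\mathrm{TD}}}(w_k) = \arg\min_{w} \Big\{ \mathcal{L}_{\mathrm{TD}}(w) + \tfrac{1}{2\lambda}\,\lVert w - w_k \rVert^{2} \Big\}

In this reading, the value network's gradient steps approximate the inner minimization (i.e., the proximal operator), while the target network carries the iterate w_k, matching the role assignment stated in the abstract.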
Keywords
state-of-the-art off-policy algorithms, improved action value estimate, computational resources, continuous-control RL benchmarks, proximal deterministic policy, simple techniques, off-policy Reinforcement Learning, off-policy RL, stochastic proximal point iteration, value network, proximal operator, value functions