
Dual Parallel Policy Iteration With Coupled Policy Improvement

IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS(2024)

Abstract
In this article, a novel coupled policy improvement mechanism is developed for improving policy iteration (PI) algorithms. In contrast to common PI, the developed dual parallel policy iteration (DPPI) with the coupled policy improvement mechanism consists of two parallel PIs. At each PI step, the performances of the two parallel policies are evaluated and the better one is designated the dominant policy. The dominant policy then guides both parallel policy improvements in a soft manner by constraining the Kullback-Leibler (KL) divergence between the dominant policy and the policy to be updated. It is proven that the convergence of DPPI is guaranteed under the designed coupled policy improvement mechanism. Moreover, it is shown that, under certain conditions, the Q-functions of the two new policies obtained in each parallel policy improvement are larger than those of all previous dominant policies, which is conducive to accelerating the PI process and improving policy learning efficiency. Furthermore, by combining DPPI with the twin delayed deep deterministic policy gradient (TD3), we propose a reinforcement learning (RL) algorithm: parallel TD3 (PTD3). Experimental results on continuous-action control tasks in the MuJoCo and OpenAI Gym platforms show that the proposed PTD3 outperforms state-of-the-art RL algorithms.
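The coupled mechanism the abstract describes can be illustrated in a tabular setting: evaluate two policies, pick the better one as dominant, and improve each policy by maximizing its own Q-function subject to a KL penalty toward the dominant policy (which has the familiar closed form pi_dom(a|s) * exp(Q(s,a)/beta), renormalized). The sketch below is an illustrative toy construction, not the paper's implementation; the random MDP, the mean-value dominance criterion, and the temperature `beta` are all assumptions.

```python
import numpy as np

# Hypothetical toy MDP (sizes, rewards, and beta are illustrative assumptions).
rng = np.random.default_rng(0)
nS, nA, gamma, beta = 4, 3, 0.9, 0.5
P = rng.dirichlet(np.ones(nS), size=(nS, nA))   # P[s, a] = next-state distribution
R = rng.random((nS, nA))                        # reward R[s, a]

def evaluate(pi):
    """Exact policy evaluation: solve (I - gamma * P_pi) V = r_pi, then Q = R + gamma * P V."""
    P_pi = np.einsum('sa,san->sn', pi, P)
    r_pi = np.einsum('sa,sa->s', pi, R)
    V = np.linalg.solve(np.eye(nS) - gamma * P_pi, r_pi)
    Q = R + gamma * P @ V
    return V, Q

def kl_improve(Q, pi_dom):
    """Soft improvement: argmax_pi E_pi[Q] - beta * KL(pi || pi_dom),
    whose closed form is pi_dom * exp(Q / beta), renormalized per state."""
    logits = np.log(pi_dom + 1e-12) + Q / beta
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    pi = np.exp(logits)
    return pi / pi.sum(axis=1, keepdims=True)

# Two parallel policies with different random initializations.
pi1 = rng.dirichlet(np.ones(nA), size=nS)
pi2 = rng.dirichlet(np.ones(nA), size=nS)

for _ in range(50):
    V1, Q1 = evaluate(pi1)
    V2, Q2 = evaluate(pi2)
    # The better-performing policy acts as the dominant policy this step.
    pi_dom = pi1 if V1.mean() >= V2.mean() else pi2
    # Each policy improves on its own Q, softly constrained toward pi_dom.
    pi1, pi2 = kl_improve(Q1, pi_dom), kl_improve(Q2, pi_dom)
```

Because the updated policy can always fall back to `pi_dom` (zero KL cost), each soft improvement step is non-decreasing in value, which mirrors the monotone-improvement property the paper proves for DPPI.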
Keywords
Coupled policy improvement mechanism,dominant policy,dual parallel policy iteration (DPPI),reinforcement learning (RL)