State-Dependent Maximum Entropy Reinforcement Learning for Robot Long-Horizon Task Learning

Journal of Intelligent & Robotic Systems (2024)

Abstract
Task-oriented robot learning has shown significant potential with the development of Reinforcement Learning (RL) algorithms. However, learning long-horizon tasks remains a formidable challenge for robots because such tasks are inherently complex, typically comprising multiple diverse stages. General-purpose RL algorithms commonly suffer from slow convergence, or fail to converge altogether, when applied to these tasks. The underlying causes are local-optima traps and redundant exploration, which arise at the start of a new stage or at the junction between two consecutive stages. To address these challenges, we propose a novel state-dependent maximum entropy (SDME) reinforcement learning algorithm, which balances the exploration-exploitation trade-off around three kinds of critical states that arise from the unique structure of long-horizon tasks. We conducted experiments in an open-source simulation environment on two representative long-horizon tasks. The proposed SDME algorithm learns faster and more stably, requiring only one-third as many learning samples as baseline approaches. Furthermore, we assessed the generalization ability of our method under randomly initialized conditions; the success rate of the SDME algorithm is nearly twice that of the baselines. Our code will be available at https://github.com/Peter-zds/SDME .
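The abstract does not spell out the SDME update rule, but the name suggests a maximum-entropy objective whose entropy temperature varies with state rather than being a single global scalar. Below is a minimal illustrative sketch of that idea, assuming a SAC-style soft Bellman target in which the usual fixed temperature alpha is replaced by a learned function alpha(s); the class StateDependentTemperature, its architecture, and the Softplus positivity constraint are hypothetical choices for illustration, not the paper's actual design.

import torch
import torch.nn as nn


class StateDependentTemperature(nn.Module):
    """Hypothetical sketch: maps a state s to an entropy coefficient
    alpha(s) > 0, replacing SAC's single global temperature. How the
    paper actually handles its three kinds of critical states is not
    given in the abstract, so this network is purely illustrative."""

    def __init__(self, state_dim: int, hidden_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
            nn.Softplus(),  # keeps alpha(s) strictly positive
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)


def soft_td_target(reward, next_q, next_log_prob, alpha_next, gamma=0.99):
    """Soft Bellman target with a state-dependent entropy bonus:
        y = r + gamma * (Q(s', a') - alpha(s') * log pi(a' | s'))
    A larger alpha(s') encourages exploration near that state; a smaller
    one favors exploitation."""
    return reward + gamma * (next_q - alpha_next * next_log_prob)


if __name__ == "__main__":
    alpha_net = StateDependentTemperature(state_dim=8)
    s_next = torch.randn(32, 8)  # batch of next states
    alpha = alpha_net(s_next)    # per-state temperatures, shape (32, 1)
    y = soft_td_target(torch.zeros(32, 1), torch.zeros(32, 1),
                       torch.zeros(32, 1), alpha)

Under this reading, states near stage junctions could be assigned higher alpha(s) to escape local optima, while well-explored mid-stage states receive lower alpha(s) to curb redundant exploration, matching the exploration-exploitation balance the abstract describes.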
Keywords
Long-horizon task, Robot learning, Reinforcement learning