Improving Model-Based Balance Controllers Using Reinforcement Learning and Adaptive Sampling

2018 IEEE International Conference on Robotics and Automation (ICRA)

Citations: 7 | Views: 44
Abstract
Balance control to recover from a wide range of disturbances is an important skill for humanoid robots. Traditionally, researchers have often designed a balance controller by applying optimal control theory on a simplified model that abstracts the full-body dynamics. However, the resulting controller may not be able to recover from unexpected scenarios such as non-planar pushes, or fail to exploit full-body actions such as balancing with arm movements. This paper presents a learning framework for enhancing the performance of a model-based optimal controller by expanding the region of attraction (RoA). We train a control policy that generates additional control signals on top of the model-based controller using deep reinforcement learning techniques. Instead of relying on standard reinforcement learning formulations, we explicitly model the region of attraction and continuously adjust it during the training. By drawing the training disturbances at the boundary of the RoA, we can effectively expand the RoA while avoiding local minima. We test our learning framework for in-place balancing as well as balancing with stepping on a humanoid model in simulation.
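To make the training loop described above concrete, here is a minimal Python sketch of the adaptive-sampling idea: a learned policy adds a residual action on top of the model-based controller, disturbances are drawn at the current boundary of the region of attraction (RoA), and the RoA estimate is adjusted after each episode. This is not the authors' code; the spherical RoA parameterization, the expand/shrink factors, and the helpers simulate_episode and policy.update are illustrative assumptions.

import numpy as np

def sample_boundary_disturbance(roa_radius, rng, dim=3, noise=0.1):
    # Assumed spherical RoA model in disturbance space: draw a push whose
    # magnitude lies near the current estimated RoA boundary.
    direction = rng.normal(size=dim)
    direction /= np.linalg.norm(direction)
    magnitude = roa_radius * (1.0 + noise * rng.uniform(-1.0, 1.0))
    return magnitude * direction

def combined_action(state, policy, model_based_controller):
    # Residual architecture: the learned policy generates an additional
    # control signal on top of the model-based optimal controller.
    return model_based_controller(state) + policy(state)

def train(policy, model_based_controller, simulate_episode,
          iterations=1000, expand=1.05, shrink=0.95, seed=0):
    # simulate_episode(push, controller) is a hypothetical helper that
    # applies the push, rolls out the combined controller, and returns
    # (trajectory, recovered), where `recovered` indicates the humanoid
    # regained balance.
    rng = np.random.default_rng(seed)
    roa_radius = 1.0  # initial estimate of the recoverable push magnitude
    for _ in range(iterations):
        push = sample_boundary_disturbance(roa_radius, rng)
        trajectory, recovered = simulate_episode(
            push, lambda s: combined_action(s, policy, model_based_controller))
        policy.update(trajectory)  # any deep RL update, e.g. a policy gradient step
        # Continuously adjust the RoA estimate: expand it when the boundary
        # push was survived, shrink it otherwise.
        roa_radius *= expand if recovered else shrink
    return policy, roa_radius

Sampling at the boundary, rather than uniformly over all disturbances, concentrates training on pushes the controller can almost (but not yet) recover from, which is how the framework expands the RoA while avoiding local minima.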
Keywords
control signals, adaptive sampling, model-based balance controllers, humanoid model, in-place balancing, training disturbances, standard reinforcement learning formulations, deep reinforcement learning techniques, control policy, RoA, model-based optimal controller, learning framework, full-body actions, non-planar pushes, full-body dynamics, optimal control theory