Learning a Unified Control Policy for Safe Falling

2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)

Cited by 19 | Viewed 43 times
Abstract
Being able to fall safely is a necessary motor skill for humanoids performing highly dynamic tasks, such as running and jumping. We propose a new method to learn a policy that minimizes the maximal impulse during the fall. The optimization solves both a discrete contact planning problem and a continuous optimal control problem. Once trained, the policy can compute the optimal next contacting body part (e.g. left foot, right foot, or hands), the contact location and timing, and the required joint actuation. We represent the policy as a mixture of actor-critic neural networks, consisting of n control policies and their corresponding value functions. Each actor-critic pair is associated with one of the n possible contacting body parts. During execution, the policy corresponding to the highest value function is executed, and the associated body part makes the next contact with the ground. With this mixture-of-actor-critic architecture, the discrete contact sequence planning is solved by selecting the best critic, while the continuous control problem is solved by optimizing the actors. We show that our policy can achieve comparable, sometimes even higher, rewards than a recursive search of the action space using dynamic programming, while running 50 to 400 times faster during online execution.
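The selection rule the abstract describes — evaluate all n critics on the current state, pick the body part whose critic predicts the highest value, then run that pair's actor — can be sketched as follows. This is a minimal illustration with stand-in linear actors and critics; the names, dimensions, and parameters are hypothetical, not the paper's networks.

```python
import numpy as np

# Hypothetical sketch of the mixture-of-actor-critic selection rule:
# one actor-critic pair per candidate contacting body part.
rng = np.random.default_rng(0)

N_PARTS = 3      # e.g. left foot, right foot, hands (assumed for illustration)
STATE_DIM = 8    # assumed robot state dimension
ACTION_DIM = 4   # assumed joint actuation dimension

# Stand-in linear parameters for each critic (value function) and actor (policy).
critic_weights = [rng.normal(size=STATE_DIM) for _ in range(N_PARTS)]
actor_weights = [rng.normal(size=(ACTION_DIM, STATE_DIM)) for _ in range(N_PARTS)]

def select_and_act(state):
    """Pick the body part whose critic predicts the highest value,
    then run that pair's actor to produce the joint actuation."""
    values = [w @ state for w in critic_weights]   # one value estimate per part
    best = int(np.argmax(values))                  # discrete contact choice
    action = actor_weights[best] @ state           # continuous control output
    return best, action

state = rng.normal(size=STATE_DIM)
part, action = select_and_act(state)
print(part, action.shape)
```

The argmax over critics resolves the discrete contact planning, while the chosen actor handles the continuous control — matching the split the abstract attributes to the architecture.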
Keywords
unified control policy,safe falling,necessary motor skill,highly dynamic tasks,discrete contact planning problem,continuous optimal control problem,optimal next contacting body part,left foot,required joint actuation,actor-critic neural network,highest value function,associated body part,actor-critic architecture,discrete contact sequence planning,continuous control problem,dynamic programming