PLATO: Policy Learning using Adaptive Trajectory Optimization

2017 IEEE International Conference on Robotics and Automation (ICRA) (2016)

Cited by 161
Abstract
Policy search can in principle acquire complex strategies for control of robots and other autonomous systems. When the policy is trained to process raw sensory inputs, such as images and depth maps, it can acquire a strategy that combines perception and control. However, effectively processing such complex inputs requires an expressive policy class, such as a large neural network. These high-dimensional policies are difficult to train, especially when training must be done for safety-critical systems. We propose PLATO, an algorithm that trains complex control policies with supervised learning, using model-predictive control (MPC) to generate the supervision. PLATO uses an adaptive training method to modify the behavior of MPC to gradually match the learned policy, in order to generate training samples at states that are likely to be visited by the policy while avoiding highly undesirable on-policy actions. We prove that this type of adaptive MPC expert produces supervision that leads to good long-horizon performance of the resulting policy. We also empirically demonstrate that MPC can still avoid dangerous on-policy actions in unexpected situations during training. Our empirical results on a set of challenging simulated aerial vehicle tasks demonstrate that, compared to prior methods, PLATO learns faster, experiences substantially fewer catastrophic failures (crashes) during training, and often converges to a better policy.
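To make the training scheme described in the abstract concrete, here is a minimal, hypothetical sketch of a PLATO-style loop on a 1D double integrator. It is not the authors' implementation: the random-shooting MPC, the quadratic penalty pulling the executed action toward the learned policy (standing in for the paper's adaptive-expert term), and the linear least-squares policy (standing in for a neural network) are all illustrative simplifications. The key structure is faithful to the abstract: the adapted MPC chooses the actions actually executed, plain MPC provides the supervision labels, and the policy is fit to those labels by supervised learning on the aggregated dataset.

```python
# PLATO-style training sketch: adapted MPC collects data, plain MPC labels it,
# and a policy is fit by supervised learning. All details are illustrative.
import numpy as np

rng = np.random.default_rng(0)
dt, H = 0.1, 10  # simulation timestep and MPC planning horizon

def step(x, u):
    """Double-integrator dynamics; state = (position, velocity)."""
    return np.array([x[0] + dt * x[1], x[1] + dt * u])

def cost(x, u):
    """Quadratic cost: drive the state to the origin with small actions."""
    return x @ x + 0.1 * u * u

def mpc_action(x, policy_u=None, lam=0.0, n_samples=256):
    """Random-shooting MPC. With lam > 0, the first action of each candidate
    sequence is penalized for deviating from the learned policy's action,
    a simple stand-in for PLATO's adaptive MPC expert."""
    best_u, best_c = 0.0, np.inf
    for _ in range(n_samples):
        us = rng.uniform(-2.0, 2.0, H)
        c, xi = 0.0, x.copy()
        for u in us:
            c += cost(xi, u)
            xi = step(xi, u)
        if policy_u is not None:
            c += lam * (us[0] - policy_u) ** 2  # deviation penalty
        if c < best_c:
            best_u, best_c = us[0], c
    return best_u

# Learned policy: linear in the state, refit on all MPC labels so far
# (DAgger-style data aggregation).
theta = np.zeros(2)
states, labels = [], []
lam = 5.0  # fixed trade-off weight; as the policy improves, the adapted
           # MPC's executed actions gradually match the policy's actions

for episode in range(10):
    x = rng.uniform(-1.0, 1.0, 2)
    for t in range(30):
        pi_u = theta @ x
        u_label = mpc_action(x)                          # supervision: plain MPC
        u_exec = mpc_action(x, policy_u=pi_u, lam=lam)   # executed: adapted MPC
        states.append(x.copy())
        labels.append(u_label)
        x = step(x, u_exec)
    X, y = np.array(states), np.array(labels)
    theta, *_ = np.linalg.lstsq(X, y, rcond=None)

print("learned policy gains:", theta)
```

Because the adapted MPC, not the partially trained policy, picks the executed actions, the system visits states the final policy is likely to reach while still avoiding the catastrophic on-policy actions the abstract highlights; the labels, by contrast, always come from MPC optimizing the task cost alone.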
Keywords
PLATO, policy learning using adaptive trajectory optimization, policy search, robots, autonomous systems, continuous reset-free reinforcement learning algorithm, complex control policies, supervised learning, model-predictive control, adaptive training method, adaptive MPC, simulated aerial vehicle tasks