Combining Model-Based Design and Model-Free Policy Optimization to Learn Safe, Stabilizing Controllers

IFAC-PapersOnLine (2021)

Abstract
This paper introduces a framework for learning a safe, stabilizing controller for a system with unknown dynamics using model-free policy optimization algorithms. Using a nominal dynamics model, the user specifies a candidate Control Lyapunov Function (CLF) around the desired operating point and specifies the desired safe set using a Control Barrier Function (CBF). Using penalty methods from the optimization literature, we then develop a family of policy optimization problems that attempt to minimize control effort while satisfying the pointwise constraints used to specify the CLF and CBF. We demonstrate that when the penalty terms are scaled correctly, the optimization prioritizes the maintenance of safety over stability, and stability over optimality. We discuss how standard reinforcement learning algorithms can be applied to the problem, and validate the approach through simulation. We then illustrate how the approach can be applied to a class of hybrid models commonly used in the dynamic walking literature, and use it to learn safe, stable walking behavior over a randomly spaced sequence of stepping stones.
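As a rough illustration of the penalty-method idea described in the abstract (a sketch, not code from the paper), a per-step reward could combine a control-effort term with hinge penalties on the pointwise CLF and CBF conditions, with the CBF weight chosen much larger than the CLF weight so that safety dominates stability and stability dominates optimality. The function names and weight values below are hypothetical.

```python
import numpy as np

# Hypothetical penalty-scaled reward: effort is penalized least, CLF (stability)
# violations more, and CBF (safety) violations most, so that safety > stability
# > optimality in priority. Weights are illustrative only.
W_CLF = 1e2   # penalty weight on the pointwise CLF decrease condition (stability)
W_CBF = 1e4   # much larger penalty weight on the pointwise CBF condition (safety)

def penalty_reward(u, clf_residual, cbf_residual):
    """Reward for one time step.

    u            : control input (np.ndarray)
    clf_residual : amount by which the CLF decrease condition is violated
                   (<= 0 means the condition is satisfied)
    cbf_residual : amount by which the CBF condition is violated
                   (<= 0 means the condition is satisfied)
    """
    effort = float(np.dot(u, u))             # quadratic control-effort term
    clf_violation = max(clf_residual, 0.0)   # hinge penalty, active only on violation
    cbf_violation = max(cbf_residual, 0.0)
    return -(effort + W_CLF * clf_violation + W_CBF * cbf_violation)

# Example: even a small CBF violation outweighs a larger CLF violation and any effort.
print(penalty_reward(np.array([0.5, -0.2]), clf_residual=0.3, cbf_residual=0.01))
```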
Keywords
Adaptation, learning in physical agents, Lyapunov methods, reinforcement learning control, control problems under conflict, uncertainties, optimal control theory