The Feasibility of Constrained Reinforcement Learning Algorithms: A Tutorial Study
arXiv (2024)

Abstract
Satisfying safety constraints is a priority concern when solving optimal control problems (OCPs). Due to the infeasibility phenomenon, in which a constraint-satisfying solution cannot be found, it is necessary to identify a feasible region before implementing a policy. Existing feasibility theories built for model predictive control (MPC) consider only the feasibility of the optimal policy. However, reinforcement learning (RL), another important control method, solves for the optimal policy iteratively, producing a series of non-optimal intermediate policies. Feasibility analysis of these non-optimal policies is also necessary for iteratively improving constraint satisfaction, but it is not available under existing MPC feasibility theories. This paper proposes a feasibility theory that applies to both MPC and RL by filling in the missing feasibility analysis for an arbitrary policy. The basis of our theory is to decouple policy solving and implementation into two temporal domains: the virtual-time domain and the real-time domain. This allows us to separately define initial and endless, state and policy feasibility, and their corresponding feasible regions. Based on these definitions, we analyze the containment relationships between the different feasible regions, which enables us to describe the feasible region of an arbitrary policy. We further provide virtual-time constraint design rules, along with a practical design tool called the feasibility function, that help to achieve the maximum feasible region. We review most existing constraint formulations and point out that they are essentially applications of feasibility functions in different forms. We demonstrate our feasibility theory by visualizing the different feasible regions under both MPC and RL policies in an emergency braking control task.
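To make the notions of a feasibility function and a feasible region concrete, here is a minimal sketch for a one-dimensional emergency braking task. It assumes a simple point-mass model with a bounded deceleration a_max and a constraint of stopping before an obstacle at distance d; the paper's actual task, dynamics, and notation may differ. Under these assumptions, the standard kinematic stopping-distance bound v^2 / (2 a_max) yields a hand-derived feasibility function whose zero-superlevel set is the feasible region.

```python
# Hedged sketch (not the paper's implementation): a feasibility function
# for 1-D emergency braking under assumed point-mass dynamics.
# State x = (d, v): distance to obstacle d >= 0, current speed v >= 0.
# Control: deceleration bounded by a_max. Safety constraint: d stays >= 0.

def feasibility_function(d: float, v: float, a_max: float) -> float:
    """h(d, v) >= 0 iff full braking stops the vehicle before the obstacle.

    The minimum stopping distance at speed v is v^2 / (2 * a_max), so
    h(d, v) = d - v^2 / (2 * a_max) characterizes the feasible region.
    """
    return d - v * v / (2.0 * a_max)

def in_feasible_region(d: float, v: float, a_max: float) -> bool:
    """Membership test for the (endlessly) feasible region."""
    return feasibility_function(d, v, a_max) >= 0.0
```

A state like (d=50 m, v=10 m/s) with a_max = 5 m/s^2 needs only 10 m to stop and is feasible, whereas (d=50 m, v=30 m/s) needs 90 m and is infeasible: no admissible policy can satisfy the constraint from that state, which is exactly the distinction a feasibility analysis must make before a policy is implemented.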