A Risk-Averse Preview-based $Q$-Learning Algorithm: Application to Highway Driving of Autonomous Vehicles

IEEE Transactions on Control Systems Technology (2022)

Abstract
A risk-averse preview-based $Q$-learning planner is presented for the navigation of autonomous vehicles. To this end, the multi-lane road ahead of the vehicle is represented by a finite-state, non-stationary Markov decision process (MDP). A risk-assessment module is then presented that leverages the preview information provided by sensors, together with a stochastic reachability module, to assign reward values to the MDP states and to update them as scenarios develop. A sampling-based, risk-averse, preview-based $Q$-learning algorithm is finally developed that generates samples from the preview information and the reward function to learn risk-averse optimal planning strategies without actual interaction with the environment. A risk factor is imposed on the objective function to avoid fluctuations of the $Q$ values, which can jeopardize the vehicle's safety and/or performance. The overall hybrid automaton model of the system is leveraged to develop a feasibility-check module that detects infeasible plans and enables the planner to react proactively to changes in the environment. Finally, to verify the efficiency of the presented algorithm, it is implemented on two highway driving scenarios of an autonomous vehicle under varying traffic density.
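To illustrate the flavor of a risk-averse, sampling-based $Q$-learning update on a finite-state road MDP, the sketch below uses an entropic (log-expected-exponential) risk measure in place of the ordinary expected TD target, which is one common way to realize the risk aversion suggested by the paper's "log-expected-exponential Bellman inequality" keyword. It is a minimal sketch under assumptions: the 3-lane toy MDP, the occupancy-risk rewards, and all names and parameters (THETA, N_SAMPLES, step, entropic_target) are hypothetical and are not taken from the paper.

```python
# Minimal sketch: risk-averse tabular Q-learning on a toy multi-lane road MDP,
# with samples drawn from an assumed stochastic model (no real interaction),
# loosely mirroring the preview-based, sampling-based setting of the abstract.
import numpy as np

rng = np.random.default_rng(0)

N_LANES, N_CELLS = 3, 10          # toy multi-lane road discretized into cells
ACTIONS = (-1, 0, +1)             # lane change left / keep lane / lane change right
THETA = 2.0                       # risk-aversion coefficient (assumed; > 0 => risk-averse)
GAMMA, ALPHA = 0.95, 0.1          # discount factor, learning rate
N_SAMPLES = 8                     # model samples per update

# Occupancy risk per (lane, cell), standing in for a risk-assessment module's output;
# here it is random for illustration only.
occupancy_risk = rng.uniform(0.0, 1.0, size=(N_LANES, N_CELLS))

Q = np.zeros((N_LANES, N_CELLS, len(ACTIONS)))

def step(lane, cell, a_idx):
    """Sample one transition from an assumed stochastic road model."""
    lane = int(np.clip(lane + ACTIONS[a_idx], 0, N_LANES - 1))
    cell = min(cell + 1, N_CELLS - 1)                # vehicle advances one cell
    reward = 1.0 - occupancy_risk[lane, cell]        # high occupancy risk -> low reward
    if rng.random() < occupancy_risk[lane, cell]:    # occasional large penalty (near-collision)
        reward -= 5.0
    return lane, cell, reward

def entropic_target(lane, cell, a_idx):
    """Risk-averse TD target: -(1/theta) * log E[exp(-theta * (r + gamma * max_a' Q(s', a')))].

    As THETA -> 0 this recovers the ordinary expected target; for THETA > 0 it
    penalizes high-variance outcomes, giving the risk-averse behavior sketched here.
    """
    returns = []
    for _ in range(N_SAMPLES):
        nl, nc, r = step(lane, cell, a_idx)
        returns.append(r + GAMMA * Q[nl, nc].max())
    z = -THETA * np.asarray(returns)
    m = z.max()                                      # shift for numerical stability
    return -(m + np.log(np.exp(z - m).mean())) / THETA

# One pass of sampling-based (model-generated) updates over all state-action pairs.
for lane in range(N_LANES):
    for cell in range(N_CELLS - 1):
        for a_idx in range(len(ACTIONS)):
            target = entropic_target(lane, cell, a_idx)
            Q[lane, cell, a_idx] += ALPHA * (target - Q[lane, cell, a_idx])

greedy_plan = Q.argmax(axis=2)    # risk-averse greedy lane decision per (lane, cell)
print(greedy_plan)
```

The log-mean-exp target is what damps fluctuations of the $Q$ values relative to a plain sample mean: rare large penalties dominate the exponential average, so high-variance actions receive systematically lower values.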
Keywords
Autonomous vehicle (AV), log-expected-exponential Bellman inequality, risk-averse Q-learning