Model-Based Reinforcement Learning for Advanced Adaptive Cruise Control: A Hybrid Car Following Policy

2022 IEEE Intelligent Vehicles Symposium (IV)

Abstract
Adaptive cruise control (ACC) is one of the frontier functionalities for highly automated vehicles and has been widely studied by both academia and industry. However, previous ACC approaches are reactive and rely on precise information about the current state of a single lead vehicle. With advances in the field of artificial intelligence, particularly in reinforcement learning, there is a significant opportunity to enhance this functionality. This paper presents an advanced ACC concept with a unique environment representation and a model-based reinforcement learning (MBRL) technique that enables predictive driving. By predictive, we refer to the capability to handle multiple lead vehicles and to maintain internal predictions about the traffic environment, which avoids reactive short-term policies. Moreover, we propose a hybrid policy that combines classical car-following policies with the MBRL policy to avoid accidents by monitoring the internal model of the MBRL policy. Our extensive evaluation in a realistic simulation environment shows that the proposed approach is superior to the reference model-based and model-free algorithms. The MBRL agent requires only 150k samples (approximately 50 hours of driving) to converge, which is 4x more sample-efficient than model-free methods.
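To illustrate the hybrid idea described above, the following is a minimal sketch, not the authors' implementation. It assumes the classical fallback is an Intelligent Driver Model (IDM) controller (the abstract does not name the classical car-following policy), and it assumes hypothetical `mbrl_policy` and `dynamics_model` callables for the learned components; the safety monitor rolls the learned model forward and hands control to the classical law when an unsafe gap is predicted.

```python
# Hypothetical sketch of a hybrid car-following policy: an MBRL controller
# is used by default, and a classical car-following law (IDM, chosen here
# as an illustrative example) takes over when rollouts of the learned
# dynamics model predict an unsafe gap. All names and parameters are
# illustrative assumptions, not taken from the paper.

from dataclasses import dataclass


@dataclass
class State:
    gap: float        # distance to the nearest lead vehicle [m]
    ego_speed: float  # ego vehicle speed [m/s]
    rel_speed: float  # ego_speed - lead_speed [m/s]


def idm_acceleration(s: State, v_des=30.0, t_head=1.5, a_max=1.5,
                     b_comf=2.0, s0=2.0) -> float:
    """Classical IDM car-following acceleration (fallback policy)."""
    s_star = s0 + max(0.0, s.ego_speed * t_head
                      + s.ego_speed * s.rel_speed
                      / (2.0 * (a_max * b_comf) ** 0.5))
    return a_max * (1.0 - (s.ego_speed / v_des) ** 4
                    - (s_star / max(s.gap, 0.1)) ** 2)


class HybridPolicy:
    def __init__(self, mbrl_policy, dynamics_model,
                 horizon=20, min_safe_gap=2.0):
        self.mbrl_policy = mbrl_policy        # learned policy (assumed interface)
        self.dynamics_model = dynamics_model  # learned model used for rollouts
        self.horizon = horizon
        self.min_safe_gap = min_safe_gap

    def act(self, state: State) -> float:
        action = self.mbrl_policy(state)
        # Monitor the internal model: simulate the proposed action forward
        # and fall back to the classical policy if a too-small gap appears.
        s = state
        for _ in range(self.horizon):
            s = self.dynamics_model(s, action)
            if s.gap < self.min_safe_gap:
                return idm_acceleration(state)
        return action
```

In this sketch the switching criterion is a simple predicted-gap threshold; the actual monitoring rule used by the authors may differ.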
Keywords
hybrid car following policy, artificial intelligence, ACC approaches, reference model-based algorithm, single lead vehicle, highly automated vehicles, frontier functionality, advanced adaptive cruise control, model-free algorithms, MBRL policy, reactive short-term policies, internal predictions, multiple lead vehicles, predictive driving, model-based reinforcement learning, unique environment representation