Enhancing Autonomous Vehicle Training with Language Model Integration and Critical Scenario Generation
CoRR (2024)
Abstract
This paper introduces CRITICAL, a novel closed-loop framework for autonomous
vehicle (AV) training and testing. CRITICAL stands out for its ability to
generate diverse scenarios, focusing on critical driving situations that target
specific learning and performance gaps identified in the Reinforcement Learning
(RL) agent. The framework achieves this by integrating real-world traffic
dynamics, driving behavior analysis, surrogate safety measures, and an optional
Large Language Model (LLM) component. We show that establishing a closed
feedback loop between the data generation pipeline and the training process
can accelerate learning, improve overall system performance, and strengthen
safety resilience. Our evaluations, conducted with Proximal Policy
Optimization (PPO) in the HighwayEnv simulation environment, demonstrate
noticeable performance improvements when critical-case generation and LLM
analysis are integrated, indicating CRITICAL's potential to improve the
robustness of AV systems and streamline the generation of critical scenarios.
This ultimately serves to speed the development of AV agents, broaden the
scope of RL training, and strengthen validation efforts for AV safety.
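To make the closed-loop idea concrete, the following is a minimal, purely illustrative sketch of such a feedback loop. Everything here is an assumption for illustration: the scenario names, the `DIFFICULTY` table, the `criticality` function (a stand-in for the paper's surrogate safety measures), and the skill-update rule are hypothetical and do not come from the paper or from any RL library.

```python
# Hypothetical closed-loop training sketch: rank scenarios by how
# "critical" they are for the current policy, then train preferentially
# on the most critical ones. All names and values are illustrative.

DIFFICULTY = {"cut_in": 0.9, "merge": 0.6, "cruise": 0.2}


def criticality(scenario, skill):
    # Stand-in for a surrogate safety measure: higher means the policy's
    # skill falls further short of what the scenario demands.
    return DIFFICULTY[scenario] - skill.get(scenario, 0.0)


def closed_loop_train(skill, iterations=5, top_k=1, lr=0.5):
    """Each iteration: rank scenarios by criticality for the current
    policy, pick the top_k gaps, and 'train' (raise skill) on them."""
    for _ in range(iterations):
        ranked = sorted(DIFFICULTY, key=lambda s: criticality(s, skill),
                        reverse=True)
        for s in ranked[:top_k]:
            skill[s] = skill.get(s, 0.0) + lr * criticality(s, skill)
    return skill


skill = closed_loop_train({})
```

In a real instantiation, `criticality` would come from driving-behavior analysis and surrogate safety measures (e.g., time-to-collision statistics collected in simulation), and the update step would be an RL training run (such as PPO in HighwayEnv) on the selected scenarios; the point of the sketch is only the loop structure: evaluate gaps, regenerate targeted scenarios, retrain.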