Encoding Distributional Soft Actor-Critic for Autonomous Driving in Multi-Lane Scenarios [Research Frontier]

IEEE Computational Intelligence Magazine (2024)

Abstract
This paper proposes a new reinforcement learning (RL) algorithm, called encoding distributional soft actor-critic (E-DSAC), for decision-making in autonomous driving. Unlike existing RL-based decision-making methods, E-DSAC handles a variable number of surrounding vehicles and eliminates the need for manually pre-designed sorting rules, resulting in higher policy performance and generality. First, an encoding distributional policy iteration (DPI) framework is developed by embedding a permutation-invariant module, which employs a feature neural network (NN) to encode the indicators of each vehicle, into the distributional RL framework. The proposed DPI framework is proven to exhibit important convergence and global-optimality properties. Next, building on the encoding DPI framework, the E-DSAC algorithm is derived by adding the gradient-based update rule of the feature NN to the policy evaluation process of the DSAC algorithm. A multi-lane driving task and the corresponding reward function are then designed to verify the effectiveness of the proposed algorithm. Results show that the policy learned by E-DSAC achieves efficient, smooth, and relatively safe autonomous driving in the designed scenario, and its final policy performance is approximately three times that of DSAC. Its effectiveness has also been verified in real-vehicle experiments.
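To make the permutation-invariant encoding idea concrete, the sketch below shows one way such a module can be built, assuming a PyTorch-style setup: a shared feature NN maps each surrounding vehicle's indicator vector to an embedding, and a symmetric sum pooling aggregates the embeddings so the result is independent of vehicle ordering and count. This is an illustrative assumption, not the authors' implementation; names such as `indicator_dim` and `feature_dim` are hypothetical.

```python
# Minimal sketch of a permutation-invariant state encoder (illustrative only).
import torch
import torch.nn as nn


class PermutationInvariantEncoder(nn.Module):
    """Encodes a variable number of surrounding vehicles into a fixed-size vector."""

    def __init__(self, indicator_dim: int = 4, feature_dim: int = 64):
        super().__init__()
        # Shared feature NN applied to each vehicle's indicator vector.
        self.feature_net = nn.Sequential(
            nn.Linear(indicator_dim, feature_dim),
            nn.ReLU(),
            nn.Linear(feature_dim, feature_dim),
        )

    def forward(self, vehicles: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        # vehicles: (batch, max_vehicles, indicator_dim), zero-padded slots allowed.
        # mask:     (batch, max_vehicles), 1 for real vehicles, 0 for padding.
        features = self.feature_net(vehicles)          # (B, N, feature_dim)
        features = features * mask.unsqueeze(-1)       # zero out padded slots
        # Sum pooling is symmetric, so the encoding is invariant to vehicle order
        # and well-defined for any number of surrounding vehicles.
        return features.sum(dim=1)                     # (B, feature_dim)


# Usage: encode a batch with a variable number of surrounding vehicles, then feed
# the fixed-size encoding (together with ego-vehicle states) to the actor and the
# distributional critic.
encoder = PermutationInvariantEncoder()
vehicles = torch.randn(2, 6, 4)                        # up to 6 surrounding vehicles
mask = torch.tensor([[1, 1, 1, 0, 0, 0],
                     [1, 1, 1, 1, 1, 0]], dtype=torch.float32)
encoding = encoder(vehicles, mask)                     # shape: (2, 64)
```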
Keywords
Autonomous Vehicles, Research Frontiers, Neural Network, Global Optimization, Policy Evaluation, Reward Function, Policy Learning, Reinforcement Learning Algorithm, Real Vehicle, Iterative Framework, Value Function, Low Speed, State Representation, Speed Limit, Corresponding State, Reinforcement Learning Methods, Automated Vehicles, Steering Angle, Lane Change, Left Lane, Invariant Representation, Return Distribution, Proximal Policy Optimization, Deep Q-network, Yaw Rate, Road Geometry