Encoding Distributional Soft Actor-Critic for Autonomous Driving in Multi-Lane Scenarios [Research Frontier]
IEEE Computational Intelligence Magazine (2024)
Abstract
This paper proposes a new reinforcement learning (RL) algorithm, called encoding distributional soft actor-critic (E-DSAC), for decision-making in autonomous driving. Unlike existing RL-based decision-making methods, E-DSAC handles a variable number of surrounding vehicles and eliminates the need for manually pre-designed sorting rules, resulting in higher policy performance and generality. First, an encoding distributional policy iteration (DPI) framework is developed by embedding a permutation-invariant module, which employs a feature neural network (NN) to encode the indicators of each vehicle, into the distributional RL framework. The proposed DPI framework is proven to exhibit important properties in terms of convergence and global optimality. Next, based on the encoding DPI framework, the E-DSAC algorithm is proposed by adding the gradient-based update rule of the feature NN to the policy evaluation process of the DSAC algorithm. Then, a multi-lane driving task and the corresponding reward function are designed to verify the effectiveness of the proposed algorithm. Results show that the policy learned by E-DSAC achieves efficient, smooth, and relatively safe autonomous driving in the designed scenario, and the final policy performance attained by E-DSAC is approximately three times that of DSAC. Furthermore, its effectiveness has also been verified through real-vehicle experiments.
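To illustrate the permutation-invariant encoding idea described in the abstract, the sketch below shows a shared feature NN applied to each surrounding vehicle's indicator vector, with the per-vehicle features pooled by summation so the result is independent of vehicle ordering and count. This is a minimal illustration, not the authors' implementation: the class name, layer sizes, input dimension, and sum pooling are all assumptions made for the example.

```python
# Minimal sketch (assumed architecture, not the paper's released code):
# a permutation-invariant encoder that maps a variable number of
# surrounding vehicles to a fixed-size state representation.
import torch
import torch.nn as nn


class PermutationInvariantEncoder(nn.Module):
    """Encode each vehicle with a shared feature NN, then sum the features
    so the output does not depend on vehicle order or vehicle count."""

    def __init__(self, vehicle_dim: int = 4, feature_dim: int = 64):
        super().__init__()
        # Shared feature NN applied to every vehicle's indicator vector
        self.feature_net = nn.Sequential(
            nn.Linear(vehicle_dim, 128),
            nn.ReLU(),
            nn.Linear(128, feature_dim),
        )

    def forward(self, vehicles: torch.Tensor) -> torch.Tensor:
        # vehicles: (batch, num_vehicles, vehicle_dim); num_vehicles may vary
        features = self.feature_net(vehicles)  # (batch, num_vehicles, feature_dim)
        return features.sum(dim=1)             # permutation-invariant pooling


if __name__ == "__main__":
    encoder = PermutationInvariantEncoder()
    obs_5 = torch.randn(2, 5, 4)  # 5 surrounding vehicles
    obs_9 = torch.randn(2, 9, 4)  # 9 surrounding vehicles
    # The same encoder handles different numbers of vehicles and is
    # unaffected by the order in which vehicles are listed.
    print(encoder(obs_5).shape, encoder(obs_9).shape)  # both (2, 64)
```

In E-DSAC, such an encoding module is trained jointly with the critic: the gradient-based update of the feature NN is folded into the policy evaluation step of DSAC, so the fixed-size representation fed to the actor and critic is learned end to end rather than hand-crafted with sorting rules.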
Key words
Sensitivity, Decision making, Wheels, Artificial neural networks, Encoding, Robustness, Task analysis