Towards practical hierarchical reinforcement learning for multi-lane autonomous driving

(2018)

Cited 16 | Views 4
Abstract
In this paper, we propose an approach for making hierarchical reinforcement learning practical for autonomous driving on multi-lane highways or urban structured roads. While this approach follows the conventional hierarchy of behavior decision, motion planning, and control, it introduces an intermediate layer of abstraction that discretizes the state-action space for motion planning according to a given behavioral decision. This hierarchical design allows principled modular extension of motion planning, in contrast to relying on either monolithic behavior cloning or a large set of hand-written rules. We show that this design enables significantly faster learning than a flat design when using both value-based and policy optimization methods (DQN and PPO). We also show that this design allows the trained models to be transferred, without any retraining, from a simulated environment with virtually no dynamics to one with significantly more realistic dynamics. Overall, our proposed approach is a promising way to allow reinforcement learning to be applied to complex multi-lane driving in the real world. In addition, we introduce and release an open source simulator for multi-lane driving that follows the OpenAI Gym APIs and is suitable for reinforcement learning research.
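The abstract describes a two-level decomposition: a behavioral decision layer on top, and a motion-planning layer that acts within a discretized action space conditioned on that decision, all interacting with a Gym-style simulator. The sketch below illustrates that decomposition only in outline; MultiLaneEnv, the maneuver set, the (lane_offset, speed_delta) discretization, and the random placeholder policies are illustrative assumptions, not the paper's actual interface or released simulator.

import random

# Discrete behavioral decisions (the top layer of the hierarchy).
MANEUVERS = ["keep_lane", "change_left", "change_right"]

# Per-maneuver discretization of the motion-planning action space
# (the intermediate abstraction layer): each entry is a
# (target_lane_offset, target_speed_delta) pair the low-level policy may pick.
SUB_ACTIONS = {
    "keep_lane":    [(0, -1.0), (0, 0.0), (0, +1.0)],
    "change_left":  [(-1, -1.0), (-1, 0.0)],
    "change_right": [(+1, -1.0), (+1, 0.0)],
}


class MultiLaneEnv:
    """Toy stand-in for a Gym-style multi-lane driving environment."""

    def reset(self):
        self.lane, self.speed, self.t = 1, 20.0, 0
        return (self.lane, self.speed)

    def step(self, action):
        lane_offset, speed_delta = action
        self.lane = max(0, min(2, self.lane + lane_offset))   # stay within 3 lanes
        self.speed = max(0.0, self.speed + speed_delta)       # no reversing
        self.t += 1
        reward = self.speed * 0.01 - abs(lane_offset) * 0.1   # toy reward
        done = self.t >= 100
        return (self.lane, self.speed), reward, done, {}


def behavior_policy(obs):
    """High-level decision layer (random placeholder for a trained policy)."""
    return random.choice(MANEUVERS)


def motion_policy(obs, maneuver):
    """Low-level planner restricted to the maneuver's discretized sub-actions."""
    return random.choice(SUB_ACTIONS[maneuver])


if __name__ == "__main__":
    env = MultiLaneEnv()
    obs, done, total = env.reset(), False, 0.0
    while not done:
        maneuver = behavior_policy(obs)           # behavioral decision
        action = motion_policy(obs, maneuver)     # motion planning within it
        obs, reward, done, _ = env.step(action)   # Gym-style transition
        total += reward
    print(f"episode return: {total:.2f}")

In the paper's setting, the two placeholder policies would be replaced by trained DQN or PPO agents and the toy environment by the released Gym-compatible multi-lane simulator; the restriction of the low-level action set by the chosen behavioral decision is what the abstract refers to as the intermediate layer of abstraction.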