Deep Reinforcement Learning for Local Path Following of an Autonomous Formula SAE Vehicle

Harvey Merton, Thomas Delamore, Karl Stol, Henry Williams

arXiv (Cornell University), 2024

Abstract
With the continued introduction of driverless events to Formula: Society of Automotive Engineers (F:SAE) competitions around the world, teams are investigating all aspects of the autonomous vehicle stack. This paper presents the use of Deep Reinforcement Learning (DRL) and Inverse Reinforcement Learning (IRL) to map locally-observed cone positions to a desired steering angle for race track following. Two state-of-the-art algorithms not previously tested in this context, soft actor critic (SAC) and adversarial inverse reinforcement learning (AIRL), are used to train models in a representative simulation. Three novel reward functions for use by RL algorithms in an autonomous racing context are also discussed. Tests performed in simulation and the real world suggest that both algorithms can successfully train models for local path following. Suggestions for future work are presented to allow these models to scale to a full F:SAE vehicle.
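
The paper does not ship code, but the pipeline the abstract describes (a policy that maps locally observed cone positions to a steering command, trained with SAC in simulation) can be sketched with off-the-shelf tooling. The snippet below is a minimal illustration, not the authors' implementation: ConeTrackEnv, its observation layout, and its placeholder reward are assumptions standing in for the paper's simulator and reward functions, and the SAC implementation is taken from stable-baselines3.

```python
# Minimal sketch: train an SAC policy that maps locally observed cone
# positions to a steering angle. ConeTrackEnv, its dynamics, and its
# reward are placeholders, not the paper's simulator or reward functions.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import SAC


class ConeTrackEnv(gym.Env):
    """Toy track-following environment (hypothetical stand-in)."""

    N_CONES = 10  # number of nearest cones observed per step (assumption)

    def __init__(self):
        super().__init__()
        # Observation: (x, y) body-frame positions of the N nearest cones.
        self.observation_space = spaces.Box(
            low=-50.0, high=50.0, shape=(self.N_CONES * 2,), dtype=np.float32
        )
        # Action: normalised steering angle in [-1, 1].
        self.action_space = spaces.Box(low=-1.0, high=1.0, shape=(1,), dtype=np.float32)

    def _observe(self):
        # Placeholder perception: random cone layout instead of a simulator.
        return self.observation_space.sample()

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self._steps = 0
        return self._observe(), {}

    def step(self, action):
        self._steps += 1
        obs = self._observe()
        # Placeholder reward: penalise mean lateral cone offset as a crude
        # centreline error; the paper proposes three purpose-built rewards.
        lateral_error = float(np.mean(obs[1::2]))
        reward = -abs(lateral_error)
        terminated = False
        truncated = self._steps >= 200
        return obs, reward, terminated, truncated, {}


if __name__ == "__main__":
    env = ConeTrackEnv()
    model = SAC("MlpPolicy", env, verbose=1)
    model.learn(total_timesteps=10_000)

    # Deployment: the trained policy maps cone observations to steering.
    obs, _ = env.reset()
    steering, _ = model.predict(obs, deterministic=True)
    print("commanded steering (normalised):", steering)
```

The sketch only covers the forward-RL (SAC) path; the AIRL variant described in the abstract would instead recover the reward from expert driving demonstrations rather than hand-writing it.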
Keywords
Reinforcement Learning, Lane Detection, Deep Learning, Driver Assistance Systems, Urban Driving