
SAN: Scene Anchor Networks for Joint Action-Space Prediction

Faris Janjos, Maxim Dolgov, Muhamed Kuric, Yinzhe Shen, J. Marius Zöllner

2022 IEEE Intelligent Vehicles Symposium (IV)

Abstract
In this work, we present a novel multi-modal trajectory prediction architecture. We decompose the uncertainty of future trajectories along higher-level scene characteristics and lower-level motion characteristics, and model multi-modality along both dimensions separately. The scene uncertainty is captured in a joint manner, where diversity of scene modes is ensured by training multiple separate anchor networks which specialize to different scene realizations. At the same time, each network outputs multiple trajectories that cover smaller deviations given a scene mode, thus capturing motion modes. In addition, we train our architectures with an outlier-robust regression loss function, which offers a trade-off between the outlier-sensitive L2 and outlier-insensitive L1 losses. Our scene anchor model achieves improvements over the state of the art on the INTERACTION dataset, outperforming the StarNet architecture from our previous work.
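To make the two-level decomposition concrete, below is a minimal PyTorch-style sketch of the idea the abstract describes: a small ensemble of anchor networks (scene modes), each emitting several trajectories (motion modes), trained with a winner-takes-all regression loss. All names here (AnchorNetwork, SceneAnchorEnsemble, winner_takes_all_huber), the network sizes, and the input representation are hypothetical, and the Huber loss is only a stand-in for the paper's outlier-robust L2/L1 trade-off, not its exact formulation.

```python
# Illustrative sketch only, not the authors' implementation. Assumptions: the
# scene context is a flat feature vector; there are K separate anchor networks
# (one per scene mode), each predicting M trajectories (motion modes) over a
# horizon of T (x, y) positions. The Huber (smooth-L1) loss stands in for the
# paper's outlier-robust trade-off between L2 and L1 regression losses.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AnchorNetwork(nn.Module):
    """One scene-mode specialist: scene context -> M candidate trajectories."""

    def __init__(self, context_dim: int, num_motion_modes: int, horizon: int):
        super().__init__()
        self.num_motion_modes = num_motion_modes
        self.horizon = horizon
        self.mlp = nn.Sequential(
            nn.Linear(context_dim, 128),
            nn.ReLU(),
            nn.Linear(128, num_motion_modes * horizon * 2),  # (x, y) per step
        )

    def forward(self, context: torch.Tensor) -> torch.Tensor:
        # (batch, M, T, 2)
        return self.mlp(context).view(-1, self.num_motion_modes, self.horizon, 2)


class SceneAnchorEnsemble(nn.Module):
    """K anchor networks, one per scene mode; output is (batch, K, M, T, 2)."""

    def __init__(self, context_dim: int, num_scene_modes: int,
                 num_motion_modes: int, horizon: int):
        super().__init__()
        self.anchors = nn.ModuleList(
            [AnchorNetwork(context_dim, num_motion_modes, horizon)
             for _ in range(num_scene_modes)]
        )

    def forward(self, context: torch.Tensor) -> torch.Tensor:
        return torch.stack([net(context) for net in self.anchors], dim=1)


def winner_takes_all_huber(pred: torch.Tensor, gt: torch.Tensor) -> torch.Tensor:
    """Backpropagate only through the best (scene, motion) mode per sample.

    pred: (batch, K, M, T, 2), gt: (batch, T, 2). Huber behaves like L2 for
    small residuals and like L1 for large ones, i.e. an outlier-robust
    compromise in the spirit of the loss described in the abstract.
    """
    target = gt[:, None, None].expand_as(pred)            # broadcast over K, M
    per_mode = F.huber_loss(pred, target, reduction="none").mean(dim=(-1, -2))
    return per_mode.flatten(1).min(dim=1).values.mean()   # winner-takes-all


if __name__ == "__main__":
    model = SceneAnchorEnsemble(context_dim=64, num_scene_modes=3,
                                num_motion_modes=4, horizon=30)
    context = torch.randn(8, 64)
    ground_truth = torch.randn(8, 30, 2)
    loss = winner_takes_all_huber(model(context), ground_truth)
    loss.backward()
    print(f"loss: {loss.item():.4f}")
```

The separation mirrors the abstract's claim: diversity across scene modes comes from training separate anchor networks, while each network's multiple outputs absorb smaller motion-level deviations. The winner-takes-all reduction over modes is a common way to train multi-modal prediction heads and is used here purely for illustration; the paper's actual training scheme and loss may differ.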
Keywords
scene anchor networks,joint action-space prediction,multimodal trajectory prediction architecture,higher-level scene characteristics,lower-level motion characteristics,model multimodality,scene uncertainty,scene mode,motion modes,outlier-robust regression loss function,SAN,outlier-sensitive L2 losses,outlier-insensitive L1 losses,INTERACTION dataset,autonomous vehicles