
aiMotive Dataset: A Multimodal Dataset for Robust Autonomous Driving with Long-Range Perception

Tamás Matuszka, Iván Barton, Ádám Butykai, Péter Hajas, Dávid Kiss, Domonkos Kovács, Sándor Kunsági-Máté, Péter Lengyel, Gábor Németh, Levente Pető, Dezső Ribli, Dávid Szeghy, Szabolcs Vajna, Bálint Varga

CoRR (2023)

Abstract
Autonomous driving is a popular research area within the computer vision research community. Since autonomous vehicles are highly safety-critical, ensuring robustness is essential for real-world deployment. While several public multimodal datasets are accessible, they mainly comprise two sensor modalities (camera, LiDAR) which are not well suited for adverse weather. In addition, they lack far-range annotations, making it harder to train neural networks that are the base of a highway assistant function of an autonomous vehicle. Therefore, we introduce a multimodal dataset for robust autonomous driving with long-range perception. The dataset consists of 176 scenes with synchronized and calibrated LiDAR, camera, and radar sensors covering a 360-degree field of view. The collected data was captured in highway, urban, and suburban areas during daytime, night, and rain and is annotated with 3D bounding boxes with consistent identifiers across frames. Furthermore, we trained unimodal and multimodal baseline models for 3D object detection. Data are available at \url{https://github.com/aimotive/aimotive_dataset}.
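As a rough illustration of the annotation structure the abstract describes (per-frame 3D bounding boxes with identifiers that stay consistent across frames), the sketch below shows how such annotations might be read from per-frame JSON files. The directory layout and field names (`objects`, `track_id`, `center`, `size`, `yaw`) are assumptions made for illustration only; the actual format is documented in the linked repository.

```python
# Minimal sketch of consuming per-frame 3D box annotations like those described
# in the abstract. File layout and field names are assumptions, not the dataset's
# actual schema; see https://github.com/aimotive/aimotive_dataset for the real format.
import json
from pathlib import Path


def load_frame_annotations(frame_json: Path):
    """Return a list of (track_id, category, center_xyz, size_lwh, yaw) tuples."""
    with frame_json.open() as f:
        frame = json.load(f)
    boxes = []
    for obj in frame.get("objects", []):  # assumed key
        boxes.append((
            obj["track_id"],              # identifier assumed consistent across frames
            obj["category"],              # e.g. car, truck, pedestrian
            tuple(obj["center"]),         # (x, y, z) box center
            tuple(obj["size"]),           # (length, width, height)
            obj["yaw"],                   # heading angle in radians
        ))
    return boxes


if __name__ == "__main__":
    # Assumed layout: one directory per scene, JSON annotations per frame.
    for scene_dir in sorted(Path("aimotive_dataset").glob("scene_*")):
        for frame_json in sorted(scene_dir.glob("annotations/*.json")):
            print(frame_json.name, len(load_frame_annotations(frame_json)), "boxes")
```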
Keywords
aimotive dataset,multimodal dataset,robust autonomous driving