MA-VIED: A Multisensor Automotive Visual Inertial Event Dataset

IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS (2023)

Abstract
Visual Inertial Odometry (VIO) and Simultaneous Localization and Mapping (SLAM) have attracted increasing interest in both the consumer and racing automotive sectors in recent decades. With the introduction of novel neuromorphic vision sensors, it is now possible to accurately localize a vehicle even under complex environmental conditions, leading to an improved and safer driving experience. In this paper, we propose MA-VIED, a large-scale driving dataset that collects race-track-like loops, maneuvers, and standard driving scenarios in a rich sensory dataset. MA-VIED provides highly accurate IMU data, standard and event camera streams, and RTK position data from a dual GPS antenna, all hardware-synchronized with the cameras and the IMU. In addition, we collect accurate wheel odometry data and other signals from the vehicle's CAN bus. The dataset contains 13 sequences recorded in urban, suburban, and race-track-like environments under varying lighting conditions and driving dynamics. We provide ground-truth RTK data for algorithm evaluation, as well as calibration sequences for both the IMU and the cameras. We then present three tests that demonstrate the suitability of MA-VIED for monocular VIO applications, using state-of-the-art VIO algorithms and an EKF-based sensor fusion solution. The experimental results show that MA-VIED can support the development and prototyping of novel automotive-oriented frame-based and event-based monocular VIO algorithms.
Keywords
Visual inertial odometry, event vision, MA-VIED automotive dataset, sensor fusion
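The abstract mentions an EKF-based sensor fusion solution for combining inertial and RTK position data. As a minimal sketch of that idea (hypothetical and greatly simplified, not the authors' implementation), the example below fuses a 1-D constant-velocity IMU prediction with an RTK-like position measurement; all function names and noise values are illustrative assumptions.

```python
import numpy as np

# Hypothetical 1-D EKF fusion sketch: IMU acceleration drives the
# prediction; an RTK-GPS position fix drives the update.
# State x = [position, velocity].

def ekf_predict(x, P, a, dt, q=1e-3):
    """Propagate the state with IMU acceleration `a` over `dt` seconds."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity transition
    B = np.array([0.5 * dt**2, dt])         # acceleration input mapping
    x = F @ x + B * a
    P = F @ P @ F.T + q * np.eye(2)         # inflate with process noise
    return x, P

def ekf_update(x, P, z, r=1e-2):
    """Correct the state with an RTK position measurement `z`."""
    H = np.array([[1.0, 0.0]])              # we observe position only
    S = H @ P @ H.T + r                     # innovation covariance (1x1)
    K = P @ H.T / S                         # Kalman gain (2x1)
    x = x + (K * (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

x = np.zeros(2)
P = np.eye(2)
# Accelerate at 2 m/s^2 for 1 s (ten 0.1 s IMU steps), then fuse one
# RTK fix at the kinematically expected position of 1.0 m.
for _ in range(10):
    x, P = ekf_predict(x, P, a=2.0, dt=0.1)
x, P = ekf_update(x, P, z=1.0)
print(x)  # fused [position, velocity] estimate
```

In a real VIO pipeline the state would also carry orientation, biases, and camera extrinsics, but the predict/update split shown here is the same.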