V2X-Real: a Large-Scale Dataset for Vehicle-to-Everything Cooperative Perception
CoRR (2024)
Abstract
Recent advancements in Vehicle-to-Everything (V2X) technologies have enabled
autonomous vehicles to share sensing information to see through occlusions,
greatly boosting perception capability. However, no real-world dataset yet
supports full V2X cooperative perception research: existing datasets cover
either Vehicle-to-Infrastructure cooperation or Vehicle-to-Vehicle
cooperation, but not both. In this paper, we propose a dataset that
simultaneously contains multiple vehicles and smart infrastructure, with
multi-modal sensing data, to facilitate the development of V2X cooperative
perception. Our V2X-Real dataset is collected using two connected automated
vehicles
and two smart infrastructures, which are all equipped with multi-modal sensors
including LiDAR sensors and multi-view cameras. The whole dataset contains 33K
LiDAR frames and 171K camera images with over 1.2M annotated bounding boxes
across 10
categories in very challenging urban scenarios. According to the collaboration
mode and ego perspective, we derive four types of datasets for Vehicle-Centric,
Infrastructure-Centric, Vehicle-to-Vehicle, and
Infrastructure-to-Infrastructure cooperative perception. Comprehensive
multi-class, multi-agent benchmarks of SOTA cooperative perception methods are
provided. The V2X-Real dataset and benchmark code will be released.
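
The four sub-datasets follow directly from pairing the ego perspective (vehicle or infrastructure) with the type of collaborating agents. The minimal Python sketch below illustrates one plausible reading of that mapping; the `AgentType` enum, the `sub_dataset` function, and the "V2X-Real-VC/IC/V2V/I2I" abbreviations are illustrative assumptions, not the authors' released API.

```python
# Hypothetical sketch: deriving the four V2X-Real sub-datasets from the
# ego perspective and collaboration mode described in the abstract.
# All names here are illustrative, not the paper's released codebase.
from enum import Enum


class AgentType(Enum):
    VEHICLE = "vehicle"
    INFRASTRUCTURE = "infrastructure"


def sub_dataset(ego: AgentType, collaborators: set) -> str:
    """Map an ego agent and its collaborator types to a sub-dataset name."""
    if ego is AgentType.VEHICLE:
        # Vehicle ego with any infrastructure collaborator -> Vehicle-Centric;
        # vehicle-only collaborators -> Vehicle-to-Vehicle.
        return ("V2X-Real-VC" if AgentType.INFRASTRUCTURE in collaborators
                else "V2X-Real-V2V")
    # Infrastructure ego with any vehicle collaborator -> Infrastructure-Centric;
    # infrastructure-only collaborators -> Infrastructure-to-Infrastructure.
    return ("V2X-Real-IC" if AgentType.VEHICLE in collaborators
            else "V2X-Real-I2I")


if __name__ == "__main__":
    both = {AgentType.VEHICLE, AgentType.INFRASTRUCTURE}
    print(sub_dataset(AgentType.VEHICLE, both))                      # V2X-Real-VC
    print(sub_dataset(AgentType.VEHICLE, {AgentType.VEHICLE}))       # V2X-Real-V2V
    print(sub_dataset(AgentType.INFRASTRUCTURE, both))               # V2X-Real-IC
    print(sub_dataset(AgentType.INFRASTRUCTURE,
                      {AgentType.INFRASTRUCTURE}))                   # V2X-Real-I2I
```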