
Toward Robust Robot 3-D Perception in Urban Environments: The UT Campus Object Dataset

IEEE Transactions on Robotics (2024)

Abstract
We introduce the UT Campus Object Dataset (CODa), a mobile robot egocentric perception dataset collected on the University of Texas at Austin campus. Our dataset contains 8.5 h of multimodal sensor data from 3-D light detection and ranging (LiDAR), stereo RGB and RGB-D cameras, and a 9-DoF inertial measurement unit (IMU). CODa contains 58 min of ground truth annotations comprising 1.3 million 3-D bounding boxes with instance identifiers (IDs) for 53 semantic classes, 5000 frames of 3-D semantic annotations for urban terrain, and pseudo-ground-truth localization. We repeatedly traverse identical geographic regions across diverse indoor and outdoor areas, weather conditions, and times of day. Using CODa, we empirically demonstrate that: 1) 3-D object detection performance improves in urban settings when trained on CODa compared with existing datasets, 2) sensor-specific fine-tuning increases 3-D object detection accuracy, and 3) pretraining on CODa improves cross-dataset 3-D object detection performance in urban settings compared with pretraining on AV datasets. We release benchmarks for 3-D object detection and 3-D semantic segmentation, with future plans for additional tasks. We publicly release CODa on the Texas Data Repository (Zhang et al., 2023), along with pretrained models, a dataset development package, and an interactive dataset viewer. We expect CODa to be a valuable dataset for egocentric perception and planning for navigation in urban environments.
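To make the annotation format described above concrete, the sketch below shows one plausible way to represent a 3-D bounding box carrying a semantic class and an instance ID, and to recover its corner points for visualization or evaluation. The `Box3D` schema and its field names are illustrative assumptions, not CODa's published format; consult the dataset development package for the actual layout.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Box3D:
    """A 3-D bounding box annotation (hypothetical schema, not CODa's actual format)."""
    center: np.ndarray      # (x, y, z) in the sensor frame, meters
    size: np.ndarray        # (length, width, height), meters
    yaw: float              # rotation about the z-axis, radians
    semantic_class: str     # one of the dataset's semantic classes, e.g., "Pedestrian"
    instance_id: int        # identifier tracking the same object across frames

def box_corners(box: Box3D) -> np.ndarray:
    """Return the 8 corners of the box as an (8, 3) array in the sensor frame."""
    l, w, h = box.size
    # Corners in the box's local frame, centered at the origin.
    x = np.array([1, 1, 1, 1, -1, -1, -1, -1]) * (l / 2)
    y = np.array([1, -1, -1, 1, 1, -1, -1, 1]) * (w / 2)
    z = np.array([1, 1, -1, -1, 1, 1, -1, -1]) * (h / 2)
    corners = np.stack([x, y, z], axis=1)
    # Rotate about z by yaw, then translate to the box center.
    c, s = np.cos(box.yaw), np.sin(box.yaw)
    rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return corners @ rot.T + box.center

if __name__ == "__main__":
    # Example values are synthetic, for illustration only.
    box = Box3D(center=np.array([4.0, -1.5, 0.9]),
                size=np.array([0.6, 0.6, 1.8]),
                yaw=np.pi / 4,
                semantic_class="Pedestrian",
                instance_id=17)
    print(box_corners(box).round(2))
```

Representing boxes as center, size, and yaw (rather than raw corners) is the common convention in 3-D detection benchmarks, since it matches typical detector outputs and makes intersection-over-union computation straightforward.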
Key words
Data sets for robotic vision, object detection, performance evaluation and benchmarking, segmentation and categorization, service robots