YCANet: Target Detection for Complex Traffic Scenes Based on Camera-LiDAR Fusion

IEEE Sensors Journal (2024)

Abstract
In traffic scenes, target detection is hindered by complex backgrounds, illumination changes, and mutual occlusion of moving targets, all of which degrade sensor performance and raise the false detection rate. To address these challenges, this study proposes YCANet, a multi-source target detection network that integrates target tracking on the basis of camera-LiDAR fusion. The network uses an improved YOLOv7 and CenterPoint to detect targets in visible images and point clouds, respectively, and adopts the Aggregated Euclidean Distance (AED) as a new metric in the data association module for tracking the image and point-cloud detections, which improves association robustness and reduces tracking errors. An optimal matching fusion strategy is then presented to merge the detection and tracking results of the two sensors for decision-level matching. The camera-LiDAR fusion improves otherwise poor detection results, while the tracking incorporated into the detection pipeline lowers the false detection rate. A self-built dataset and part of the ONCE dataset are used for network training and testing. Compared with seven other algorithms, the experimental results show that the proposed approach better meets accuracy requirements, reaching an mAP of 83.40% while keeping the false detection rate at 18.19%.
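The abstract describes association by an aggregated Euclidean distance followed by an optimal matching step. As a rough illustration only (the paper's exact AED definition, cues, and weights are not given here; the weighted position/size cost and the `max_cost` gate below are assumptions), one common realization is a weighted sum of per-cue Euclidean distances fed into Hungarian assignment:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment


def aggregated_euclidean_cost(tracks, detections, weights=(1.0, 0.5)):
    """Illustrative aggregated cost: a weighted sum of Euclidean
    distances over two cues (box center and box size). The paper's
    actual AED may aggregate different or additional cues."""
    w_pos, w_size = weights
    cost = np.zeros((len(tracks), len(detections)))
    for i, t in enumerate(tracks):
        for j, d in enumerate(detections):
            d_pos = np.linalg.norm(t["center"] - d["center"])
            d_size = np.linalg.norm(t["size"] - d["size"])
            cost[i, j] = w_pos * d_pos + w_size * d_size
    return cost


def associate(tracks, detections, max_cost=5.0):
    """Optimal (Hungarian) matching on the aggregated cost matrix;
    pairs whose cost exceeds max_cost are rejected as non-matches."""
    cost = aggregated_euclidean_cost(tracks, detections)
    rows, cols = linear_sum_assignment(cost)
    return [(int(i), int(j)) for i, j in zip(rows, cols)
            if cost[i, j] <= max_cost]
```

The same pattern extends to decision-level fusion: camera and LiDAR track lists can be matched against each other with an analogous cost matrix before merging.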
Keywords
YCANet, Aggregated Euclidean Distances (AED), Target Detection, Target Tracking, Sensor Fusion