
BEV-LGKD: A Unified LiDAR-Guided Knowledge Distillation Framework for Multi-View BEV 3D Object Detection.

IEEE Trans. Intell. Veh. (2024)

Abstract
Recently, the Bird's-Eye-View (BEV) representation has gained increasing attention in multi-view 3D object detection, demonstrating promising applications in autonomous driving. Although multi-view camera-based systems can be deployed at low cost, high-performance multi-view BEV object detectors still require significant computational resources. Knowledge Distillation (KD) is one of the most practical techniques for training smaller yet accurate models. Unlike image classification tasks, BEV 3D object detection approaches are more complicated and consist of several components. Therefore, in this paper, we propose a unified framework named BEV-LGKD to transfer knowledge in a teacher-student manner. However, directly applying the teacher-student paradigm to BEV features fails to achieve satisfactory results due to the heavy background information in RGB cameras. To solve this problem, we propose to leverage the localization advantage of LiDAR points. Specifically, we transform the LiDAR points into BEV space and generate view-dependent foreground masks for the teacher-student paradigm. Note that our method only uses LiDAR points to guide the KD between RGB models. As the quality of depth estimation is crucial for BEV perception, we further introduce depth distillation into our framework. We have conducted comprehensive experiments on the nuScenes dataset, bringing a maximum improvement of +3.4 mAP and +7.7 NDS for the student model. The code will be released at https://github.com/NorthSummer/LGKD.
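To make the mask-guided distillation idea above concrete, the following is a minimal PyTorch-style sketch, not the authors' released code (see the GitHub link above): it rasterizes LiDAR (x, y) points into a binary BEV foreground mask and uses that mask to weight an L2 feature-distillation loss between teacher and student BEV feature maps. The function names, the 128x128 grid, the ±51.2 m range, and the plain L2 loss are illustrative assumptions rather than details taken from the paper.

# Minimal sketch (assumed names, shapes, and loss form; not the authors' code).
import torch


def lidar_bev_foreground_mask(points_xy, bev_hw=(128, 128), xy_range=(-51.2, 51.2)):
    """Rasterize LiDAR (x, y) points into a binary BEV foreground mask.

    points_xy: (N, 2) tensor of point coordinates in metres (ego frame).
    bev_hw:    assumed BEV grid resolution (H, W).
    xy_range:  assumed metric extent covered by the grid along x and y.
    """
    h, w = bev_hw
    lo, hi = xy_range
    # Map metric coordinates to grid indices and clamp points to the grid.
    ix = ((points_xy[:, 0] - lo) / (hi - lo) * w).long().clamp(0, w - 1)
    iy = ((points_xy[:, 1] - lo) / (hi - lo) * h).long().clamp(0, h - 1)
    mask = torch.zeros(h, w)
    mask[iy, ix] = 1.0
    return mask  # (H, W): 1 where LiDAR gives evidence of occupied space


def masked_bev_distillation_loss(student_feat, teacher_feat, fg_mask):
    """L2 distillation between BEV features, weighted by the LiDAR-derived mask.

    student_feat, teacher_feat: (B, C, H, W) BEV feature maps.
    fg_mask:                    (H, W) mask from lidar_bev_foreground_mask.
    """
    mask = fg_mask[None, None]                     # broadcast to (1, 1, H, W)
    diff = (student_feat - teacher_feat) ** 2 * mask
    return diff.sum() / mask.sum().clamp(min=1.0)  # normalise by foreground cells


if __name__ == "__main__":
    pts = torch.rand(2048, 2) * 102.4 - 51.2       # dummy LiDAR points in [-51.2, 51.2) m
    fg = lidar_bev_foreground_mask(pts)
    t_feat = torch.randn(1, 64, 128, 128)
    s_feat = torch.randn(1, 64, 128, 128)
    print(masked_bev_distillation_loss(s_feat, t_feat, fg))

In the actual framework the masks are described as view-dependent and the method additionally uses depth distillation; this sketch only illustrates the foreground-masked BEV feature term.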
Keywords
knowledge distillation framework, 3D, detection, BEV-LGKD, LiDAR-guided, multi-view