Learning Cross-Modal Deep Representations for Robust Pedestrian Detection

2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017

Cited by 220
Abstract
This paper presents a novel method for detecting pedestrians under adverse illumination conditions. Our approach relies on a novel cross-modality learning framework and it is based on two main phases. First, given a multimodal dataset, a deep convolutional network is employed to learn a non-linear mapping, modeling the relations between RGB and thermal data. Then, the learned feature representations are transferred to a second deep network, which receives as input an RGB image and outputs the detection results. In this way, features which are both discriminative and robust to bad illumination conditions are learned. Importantly, at test time, only the second pipeline is considered and no thermal data are required. Our extensive evaluation demonstrates that the proposed approach outperforms the state-of-the-art on the challenging KAIST multispectral pedestrian dataset and it is competitive with previous methods on the popular Caltech dataset.
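
The abstract outlines a two-phase scheme: a network first learns to map RGB images to thermal data, and its convolutional features are then transferred into a detector that sees only RGB at test time. Below is a minimal sketch of that idea, assuming PyTorch; the module names, layer sizes, reconstruction loss, and the simple classification head are illustrative placeholders, not the paper's actual networks.

```python
# Minimal sketch of the two-phase cross-modality framework from the abstract.
# Assumes PyTorch; all architectures here are hypothetical stand-ins.
import torch
import torch.nn as nn

class FeatureBackbone(nn.Module):
    """Shared convolutional feature extractor (hypothetical layout)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, rgb):
        return self.features(rgb)

class CrossModalNet(nn.Module):
    """Phase 1: learn an RGB -> thermal mapping so the backbone encodes
    illumination-robust features."""
    def __init__(self, backbone):
        super().__init__()
        self.backbone = backbone
        # Decoder reconstructs a single-channel thermal image from features.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, rgb):
        return self.decoder(self.backbone(rgb))

# Phase 1: supervise predicted thermal maps with the paired thermal channel.
backbone = FeatureBackbone()
phase1 = CrossModalNet(backbone)
rgb, thermal = torch.randn(2, 3, 64, 64), torch.randn(2, 1, 64, 64)
loss = nn.functional.mse_loss(phase1(rgb), thermal)
loss.backward()

# Phase 2: transfer the cross-modally trained backbone into a detector.
# The classification head is a stand-in for a real pedestrian-detection head.
class Detector(nn.Module):
    def __init__(self, backbone):
        super().__init__()
        self.backbone = backbone
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 2))

    def forward(self, rgb):
        return self.head(self.backbone(rgb))

detector = Detector(backbone)  # reuses the transferred feature representations
scores = detector(rgb)         # RGB only: no thermal input at inference
```

At inference only the detector runs, on RGB input alone, mirroring the abstract's point that no thermal data are required at test time.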
Keywords
deep network,robust pedestrian detection,adverse illumination conditions,multimodal dataset,deep convolutional network,nonlinear mapping,RGB image,KAIST multispectral pedestrian dataset,Caltech dataset,cross-modality learning framework,cross-modal deep representation learning