Transferring Visual Knowledge For A Robust Road Environment Perception In Intelligent Vehicles

2017 IEEE 20th International Conference on Intelligent Transportation Systems (ITSC)

Abstract
Vision-based urban scene recognition can provide semantic information that will have a significant impact on intelligent transportation systems. Semantic segmentation extracts high-level information from images of urban scenes, but we find that existing models trained on public datasets often do not adapt well to other environments. This work explores the transferability of Convolutional Neural Network (CNN) features by retraining the network with a minimal dataset that incorporates training data specific to the local environment. A new local dataset is manually annotated and used to train a neural network for pixel-level semantic image information. Since data annotation is time-consuming, we evaluate the transferability of CNNs and the performance of different data augmentation methods for dataset expansion. Small datasets are normally considered insufficient for training a neural network from scratch. This paper presents an incremental fine-tuning algorithm to update the pre-trained network. The experimental results show that semantic features can be successfully transferred to a different environment by incorporating a relatively small number of local images.
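The abstract describes retraining a publicly pre-trained segmentation CNN on a small, locally annotated dataset. The snippet below is a minimal, illustrative sketch of that general fine-tuning setup, not the authors' implementation: it loads a torchvision FCN model pre-trained on a public dataset, replaces the classifier head to match the local label set, freezes the backbone, and trains only the new head. The class count, learning rate, and data loader are assumptions made for illustration.

```python
# Illustrative sketch only (PyTorch/torchvision), not the paper's code:
# fine-tune a publicly pre-trained segmentation CNN on a small local dataset.
import torch
import torch.nn as nn
import torchvision

NUM_LOCAL_CLASSES = 12  # assumption: label count of the local road dataset

# Start from a network pre-trained on a large public dataset.
# (Newer torchvision versions use the weights= argument instead of pretrained=True.)
model = torchvision.models.segmentation.fcn_resnet50(pretrained=True)

# Replace the final classifier layer to match the local label set.
model.classifier[4] = nn.Conv2d(512, NUM_LOCAL_CLASSES, kernel_size=1)

# Freeze the backbone so the transferable features stay intact while the
# small local dataset only updates the new head.
for p in model.backbone.parameters():
    p.requires_grad = False

optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad),
    lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss(ignore_index=255)  # 255 = unlabeled pixels

def train_epoch(model, loader):
    """One pass over a (hypothetical) loader of locally annotated images:
    each batch is (images [B,3,H,W] float, masks [B,H,W] long)."""
    model.train()
    for images, masks in loader:
        optimizer.zero_grad()
        logits = model(images)["out"]   # torchvision segmentation models return a dict
        loss = criterion(logits, masks)
        loss.backward()
        optimizer.step()
```

For the dataset-expansion step mentioned in the abstract, simple geometric augmentations (e.g., random horizontal flips applied jointly to image and label mask) would typically be applied inside the data loader; the specific augmentation methods compared in the paper are not detailed in this abstract.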
Keywords
neural network, pre-trained network, semantic features, local images, visual knowledge, robust road environment perception, intelligent vehicles, urban scene recognition, intelligent transportation systems, semantic segmentation, high-level information, urban scenes, Convolutional Neural Network features, CNNs, local environment, pixel-level semantic image information, data annotation, dataset expansion