Training Data Subset Search With Ensemble Active Learning

IEEE Transactions on Intelligent Transportation Systems (2022)

Abstract
Deep Neural Networks (DNNs) often rely on vast datasets for training. Given the large size of such datasets, it is conceivable that they contain specific samples that either do not contribute or negatively impact the DNN’s optimization. Modifying the training distribution to exclude such samples could provide an effective solution to improve performance and reduce training time. This paper proposes to scale up ensemble Active Learning (AL) methods to perform acquisition at a large scale (10k to 500k samples at a time). We do this with ensembles of hundreds of models, obtained at a minimal computational cost by reusing intermediate training checkpoints. This allows us to automatically and efficiently perform a training data subset search for large labeled datasets. We observe that our approach obtains favorable subsets of training data, which can be used to train more accurate DNNs than training with the entire dataset. We perform an extensive experimental study of this phenomenon on three image classification benchmarks (CIFAR-10, CIFAR-100, and ImageNet), as well as an internal object detection benchmark for prototyping perception models for autonomous driving. Unlike existing studies, our experiments on object detection are at the scale required for production-ready autonomous driving systems. We provide insights on the impact of different initialization schemes, acquisition functions, and ensemble configurations at this scale. Our results provide strong empirical evidence that optimizing the training data distribution can significantly benefit large-scale vision tasks.
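The abstract describes scoring large pools of samples with ensembles built from reused training checkpoints. As an illustrative sketch (not the paper's exact method), the snippet below implements one common ensemble acquisition function, BALD-style mutual information: samples on which the checkpoint members disagree most score highest. The function name, shapes, and the choice of mutual information over other acquisition functions are assumptions for illustration.

```python
import numpy as np

def ensemble_acquisition(member_probs, k):
    """Rank samples by ensemble disagreement (mutual information) and
    return the indices of the k highest-scoring samples.

    member_probs: softmax outputs from each checkpoint in the ensemble,
    shape (n_members, n_samples, n_classes). Illustrative sketch only.
    """
    eps = 1e-12
    # Entropy of the averaged prediction (total uncertainty).
    mean_probs = member_probs.mean(axis=0)
    total = -(mean_probs * np.log(mean_probs + eps)).sum(axis=1)
    # Average per-member entropy (uncertainty each member reports on its own).
    member_ent = -(member_probs * np.log(member_probs + eps)).sum(axis=2).mean(axis=0)
    # Mutual information = disagreement among checkpoints.
    scores = total - member_ent
    return np.argsort(-scores)[:k]
```

For example, a sample where two checkpoints predict opposite classes (high disagreement) outranks a sample where both are confidently aligned, so the former would be prioritized when carving out a training subset.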
Keywords
Active learning,object detection,image classification,ensemble,uncertainty,autonomous driving,AutoML