Dataset Distillation for Core Training Set Construction.

SMA (2020)

Abstract
Machine learning is a widely adopted solution to complex and non-linear problems, but developing a reliable, well-performing model takes considerable labor and time, and the cost grows further as models deepen and training data expands. This paper presents a method that applies a technique known as dataset distillation to data selection in order to reduce training time. We first train a model on distilled images, and then use its predictions on the original training data to measure each sample's training contribution, which serves as its sampling weight for selection. Our method enables the weights to be computed quickly and easily, even when the network is redesigned.
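The abstract only outlines the pipeline (train on distilled images, score the original training data, sample by weight). The following Python sketch illustrates one plausible reading of that pipeline; it is not the authors' code, and the choice of per-sample cross-entropy as the contribution score and the higher-loss-means-higher-weight assumption are illustrative assumptions.

```python
# Hypothetical sketch (not the authors' implementation): score original training
# samples with a model trained on distilled images, then turn the scores into
# sampling weights for selecting a core training set.
import numpy as np
import torch
import torch.nn.functional as F


def compute_sampling_weights(model, loader, device="cpu"):
    """Per-sample cross-entropy under the distilled-trained model,
    normalized into a probability distribution used as selection weights."""
    model.eval()
    losses = []
    with torch.no_grad():
        for x, y in loader:
            logits = model(x.to(device))
            loss = F.cross_entropy(logits, y.to(device), reduction="none")
            losses.append(loss.cpu())
    losses = torch.cat(losses).numpy()
    # Assumption: a higher loss means the sample is poorly covered by the
    # distilled set and therefore contributes more to training.
    return losses / losses.sum()


def select_core_set(weights, budget, seed=0):
    """Sample `budget` indices without replacement, proportional to weight."""
    rng = np.random.default_rng(seed)
    return rng.choice(len(weights), size=budget, replace=False, p=weights)
```

Because the scoring model is trained only on the small distilled set, the weights are cheap to recompute whenever the network architecture changes, which matches the abstract's claim about redesigning a network.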