Practical Dataset Distillation Based on Deep Support Vectors
arXiv (2024)
Abstract
Conventional dataset distillation requires significant computational
resources and assumes access to the entire dataset, an impractical
assumption, since it presumes all data resides on a central server. In this paper, we focus on
dataset distillation in practical scenarios with access to only a fraction of
the entire dataset. We introduce a novel distillation method that augments the
conventional process by incorporating general model knowledge via the addition
of Deep KKT (DKKT) loss. In practical settings, our approach showed improved
performance compared to the baseline distribution matching distillation method
on the CIFAR-10 dataset. Additionally, we present experimental evidence that
Deep Support Vectors (DSVs) contribute information not captured by the
original distillation, and that integrating them yields further performance
gains.
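The abstract does not spell out the objective, but the following is a minimal sketch of the general idea it describes: augmenting a distribution-matching (DM) distillation loss with a KKT-style penalty derived from a pretrained model. It assumes the DM term aligns mean feature embeddings of real and synthetic batches, and that the DKKT term drives the pretrained model's parameter gradient on the synthetic set toward zero (a stationarity condition). All names and hyperparameters here (ConvFeatureNet, dkkt_loss, lambda_dkkt) are illustrative assumptions, not the paper's actual code.

```python
# Hypothetical sketch: DM distillation loss plus a KKT-style stationarity
# penalty. Not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConvFeatureNet(nn.Module):
    """Small CNN standing in for the pretrained model (assumed architecture)."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
        )
        self.classifier = nn.Linear(32 * 4 * 4, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))


def dm_loss(model, real_x, syn_x):
    # Distribution matching: align mean embeddings of real and synthetic batches.
    return F.mse_loss(model.features(syn_x).mean(0), model.features(real_x).mean(0))


def dkkt_loss(model, syn_x, syn_y):
    # Assumed form of the DKKT term: the classification loss on the synthetic
    # set should be stationary in the pretrained model's parameters.
    ce = F.cross_entropy(model(syn_x), syn_y)
    grads = torch.autograd.grad(ce, list(model.parameters()), create_graph=True)
    return sum(g.pow(2).sum() for g in grads)


model = ConvFeatureNet()  # stands in for a pretrained model; it is not updated

syn_x = torch.randn(100, 3, 32, 32, requires_grad=True)  # learnable synthetic images
syn_y = torch.arange(10).repeat_interleave(10)           # 10 images per CIFAR-10 class
real_x = torch.randn(256, 3, 32, 32)                     # the accessible data fraction

opt = torch.optim.Adam([syn_x], lr=1e-2)
lambda_dkkt = 0.1  # hypothetical weight on the DKKT term

for step in range(10):
    opt.zero_grad()
    loss = dm_loss(model, real_x, syn_x) + lambda_dkkt * dkkt_loss(model, syn_x, syn_y)
    loss.backward(inputs=[syn_x])  # only the synthetic images are optimized
    opt.step()
```

Under this reading, the DKKT term supplies general model knowledge that the small accessible fraction of real data alone cannot, which is consistent with the abstract's motivation.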