Squeeze, Recover and Relabel: Dataset Condensation at ImageNet Scale From A New Perspective
arXiv (Cornell University), 2023
Abstract
We present a new dataset condensation framework termed Squeeze, Recover and
Relabel (SRe^2L) that decouples the bilevel optimization of model and
synthetic data during training, to handle varying scales of datasets, model
architectures and image resolutions for efficient dataset condensation. The
proposed method demonstrates flexibility across diverse dataset scales and
exhibits multiple advantages in terms of arbitrary resolutions of synthesized
images, low training cost and memory consumption with high-resolution
synthesis, and the ability to scale up to arbitrary evaluation network
architectures. Extensive experiments are conducted on Tiny-ImageNet and full
ImageNet-1K datasets. Under 50 IPC, our approach achieves the highest 42.5% and
60.8% top-1 accuracy on Tiny-ImageNet and ImageNet-1K, outperforming all
previous state-of-the-art methods by margins of 14.5% and 32.9%, respectively.
Our approach also surpasses MTT in speed, running approximately 52×
(ConvNet-4) and 16× (ResNet-18) faster while using 11.6× and 6.4× less
memory during data synthesis. Our code and condensed
datasets of 50, 200 IPC with 4K recovery budget are available at
https://github.com/VILA-Lab/SRe2L.
Keywords
dataset condensation, ImageNet scale, relabel, squeeze
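The abstract only names the three decoupled phases. The sketch below illustrates the overall pipeline shape on toy data: a linear softmax model stands in for the network, and per-feature mean/variance stand in for BatchNorm statistics. All function names and hyperparameters here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Squeeze: train a small model on the real data, then keep only its
# weights and running feature statistics (real images are discarded).
def squeeze(X, y, n_classes, lr=0.1, steps=200):
    W = np.zeros((X.shape[1], n_classes))
    onehot = np.eye(n_classes)[y]
    for _ in range(steps):
        logits = X @ W
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        W -= lr * X.T @ (p - onehot) / len(X)
    stats = (X.mean(axis=0), X.var(axis=0))  # stand-in for BN running stats
    return W, stats

# --- Recover: synthesize data by matching the stored statistics only; the
# real dataset and the model-training loop are no longer involved, so the
# bilevel optimization is decoupled.
def recover(stats, n_syn, dim, lr=0.5, steps=300):
    mu, var = stats
    Xs = rng.normal(size=(n_syn, dim))
    for _ in range(steps):
        m, v = Xs.mean(axis=0), Xs.var(axis=0)
        g_mean = 2.0 * (m - mu) / n_syn              # grad of ||mean - mu||^2
        g_var = 4.0 * (v - var) * (Xs - m) / n_syn   # grad of ||var - target||^2
        Xs -= lr * (g_mean + g_var)
    return Xs

# --- Relabel: assign soft labels to the synthetic data with the frozen model.
def relabel(W, Xs):
    logits = Xs @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    return p / p.sum(axis=1, keepdims=True)
```

Because each phase consumes only the previous phase's artifacts (weights and statistics, then synthetic data), the phases can run at different scales and resolutions, which is the property the abstract highlights.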