DD-RobustBench: An Adversarial Robustness Benchmark for Dataset Distillation
arXiv (2024)
Abstract
Dataset distillation is an advanced technique aimed at compressing datasets
into significantly smaller counterparts while preserving strong training
performance. Significant effort has been devoted to improving evaluation
accuracy under limited compression ratios, but the robustness of distilled
datasets has been overlooked. In this work, we introduce a comprehensive
benchmark that, to the best of our knowledge, is the most extensive to date
for evaluating the adversarial robustness of distilled datasets in a unified
way. Our benchmark significantly expands upon prior efforts by incorporating
a wider range of dataset distillation methods, including the latest
advancements such as TESLA and SRe2L, a diverse array of adversarial attack
methods, and evaluations across a broader and more extensive collection of
datasets, such as ImageNet-1K. Moreover, we assessed the robustness of these
distilled datasets against representative adversarial attack algorithms such
as PGD and AutoAttack, and explored their resilience from a frequency
perspective. We also found that incorporating distilled data into the
training batches of the original dataset can improve robustness.
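For concreteness, the following is a minimal sketch of the kind of robustness evaluation the abstract describes: an L-infinity PGD attack applied to a model trained on a distilled dataset, with robust accuracy measured on the perturbed inputs. The function names, epsilon budget (8/255), step size, and step count are illustrative assumptions, not values taken from the paper.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """L-inf PGD with a random start: maximize cross-entropy inside an eps-ball.
    Hyperparameters are assumed, not the paper's exact settings."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()   # gradient-ascent step
        x_adv = x + (x_adv - x).clamp(-eps, eps)       # project back into eps-ball
        x_adv = x_adv.clamp(0, 1)                      # keep valid pixel range
    return x_adv.detach()

def robust_accuracy(model, loader, device="cuda"):
    """Accuracy of `model` on PGD-perturbed inputs drawn from `loader`."""
    model.eval()
    correct = total = 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = pgd_attack(model, x, y)
        with torch.no_grad():
            correct += (model(x_adv).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return correct / total
```

In the benchmark setting, `model` would be a network trained on a distilled dataset and `loader` would serve the original test set; stronger ensembles such as AutoAttack replace the single PGD step above.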
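The closing observation, that mixing distilled data into the training batches of the original dataset can improve robustness, can likewise be sketched. The paper's exact mixing protocol is not given here; the per-batch count `mix_n` and random-sampling scheme below are hypothetical.

```python
import torch
import torch.nn.functional as F

def train_epoch_mixed(model, real_loader, distilled_x, distilled_y,
                      optimizer, mix_n=32, device="cuda"):
    """One training epoch that appends `mix_n` randomly sampled distilled
    examples to every batch of original data (a hypothetical mixing scheme)."""
    model.train()
    for x, y in real_loader:
        idx = torch.randint(0, len(distilled_x), (mix_n,))
        xb = torch.cat([x, distilled_x[idx]]).to(device)   # real + distilled images
        yb = torch.cat([y, distilled_y[idx]]).to(device)   # real + distilled labels
        optimizer.zero_grad()
        loss = F.cross_entropy(model(xb), yb)
        loss.backward()
        optimizer.step()
```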