Towards Adversarially Robust Dataset Distillation by Curvature Regularization
CoRR (2024)
Abstract
Dataset distillation (DD) compresses datasets to a fraction of their
original size while preserving rich distributional information, so that
models trained on the distilled datasets achieve comparable accuracy at a
significantly lower computational cost. Recent research in this area has
focused on improving the accuracy of models trained on distilled datasets.
In this paper, we explore a new perspective on DD: we study how to embed
adversarial robustness into distilled datasets, so that models trained on
them maintain high accuracy while also acquiring better adversarial
robustness. We propose a new method that achieves this goal by
incorporating curvature regularization into the distillation process, at
much lower computational overhead than standard adversarial training.
Extensive empirical experiments suggest that our method not only
outperforms standard adversarial training in both accuracy and robustness
at lower computational overhead, but also generates robust distilled
datasets that can withstand various adversarial attacks.
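For intuition, below is a minimal PyTorch sketch of a curvature regularizer in the spirit the abstract describes: a finite-difference penalty on loss-surface curvature (in the style of the CURE estimator), which is cheaper than generating adversarial examples at every step. The function name, the step size `h`, and the weighting are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def curvature_penalty(model, x, y, h=1e-2):
    """Finite-difference curvature regularizer (CURE-style sketch).

    Penalizes || grad L(x + z) - grad L(x) ||^2 along z = h * sign(grad L(x)),
    a cheap proxy for loss-surface curvature that avoids the inner
    optimization loop of standard adversarial training.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x, create_graph=True)[0]

    # Perturb along the (detached) sign of the input gradient.
    z = h * grad.sign().detach()
    loss_pert = F.cross_entropy(model(x + z), y)
    grad_pert = torch.autograd.grad(loss_pert, x, create_graph=True)[0]

    # Squared norm of the gradient difference approximates
    # h^2 * || H sign(grad) ||^2, where H is the input Hessian of the loss.
    diff = grad_pert - grad
    return diff.pow(2).sum(dim=tuple(range(1, x.dim()))).mean()
```

In a distillation loop, such a penalty would be added to the distillation objective, e.g. `total = distill_loss + lam * curvature_penalty(model, x_syn, y_syn)`, where `lam` is a hypothetical weighting hyperparameter and `x_syn`, `y_syn` denote the synthetic data being optimized.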