BDA: Bandit-based Transferable AutoAugment.

Shan Lu, Mingjun Zhao, Songling Yuan, Xiaoli Wang, Lei Yang, Di Niu

SDM (2023)

Abstract
AutoAugment is an automatic method for designing data augmentation policies for deep learning and has achieved significant improvements on computer vision tasks. However, since early AutoAugment approaches cost thousands of GPU hours, there has been growing demand for low-cost search methods that can still find effective augmentation policies. In this paper, we propose a multi-armed bandit algorithm, named Bandit Data Augment (BDA), to efficiently search for optimal and transferable data augmentation policies. We leverage Successive Halving to make the bandit model progressively focus on more promising augmentation operations during the search, leading to sparse selection of operations and more generalizable augmentation policies. We also propose a computationally efficient rewarding scheme to reduce the evaluation cost of augmentation policies. Extensive experiments demonstrate that BDA achieves comparable or better performance than prior AutoAugment methods across a wide range of models on the CIFAR-10/100 and ImageNet benchmarks. Moreover, BDA is 555 times and 536 times faster than AutoAugment on CIFAR-10 and ImageNet, respectively, and 16 times faster than Fast AutoAugment on ImageNet. More importantly, BDA can discover policies that are transferable across datasets and models, achieving performance similar to policies found directly on the target dataset.

Keywords: Data Augmentation, Deep Learning, Representation Learning, Bandit
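To illustrate the kind of search the abstract describes, below is a minimal, hypothetical sketch of a multi-armed bandit over augmentation operations combined with Successive Halving, which prunes the worse-scoring half of the operations after each round so later pulls concentrate on the more promising ones. The operation list, the proxy reward, and the halving schedule are illustrative assumptions, not the authors' actual implementation or reward scheme.

```python
import random

# Candidate augmentation operations (illustrative subset; the paper's full
# search space is not reproduced here).
OPERATIONS = ["ShearX", "TranslateY", "Rotate", "Color", "Posterize",
              "Solarize", "Contrast", "Brightness", "Sharpness", "Cutout"]


def proxy_reward(op):
    """Stand-in for a cheap rewarding scheme, e.g. validation improvement
    after applying `op` to a mini-batch. Random here for illustration."""
    return random.random()


def bandit_successive_halving(ops, rounds=3, pulls_per_round=20):
    """Maintain a mean-reward estimate per operation (arm); after each
    round, Successive Halving discards the worse-scoring half so the
    search progressively focuses on more promising augmentations."""
    surviving = list(ops)
    totals = {op: 0.0 for op in ops}
    counts = {op: 0 for op in ops}
    for _ in range(rounds):
        for _ in range(pulls_per_round):
            op = random.choice(surviving)          # pull an arm
            totals[op] += proxy_reward(op)
            counts[op] += 1
        # Keep the better half by empirical mean reward.
        surviving.sort(key=lambda op: totals[op] / max(counts[op], 1),
                       reverse=True)
        surviving = surviving[:max(1, len(surviving) // 2)]
    return surviving


if __name__ == "__main__":
    print("Selected operations:", bandit_successive_halving(OPERATIONS))
```

The sparse set of surviving operations would then form the building blocks of the final augmentation policy; in the paper this selection is driven by the proposed low-cost reward rather than the random placeholder used above.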