Focus on Hiders: Exploring Hidden Threats for Enhancing Adversarial Training
2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024
Abstract
Adversarial training is often formulated as a min-max problem. However, concentrating only on the worst-case adversarial examples causes the model to oscillate: samples that were previously defended or correctly classified become indefensible or misclassified in subsequent rounds of adversarial training. We characterize these non-negligible samples as "hiders", which reveal hidden high-risk regions within the seemingly secure area obtained through adversarial training and prevent the model from finding the true worst cases. We require the model to suppress hiders while defending against adversarial examples, improving accuracy and robustness simultaneously. By rethinking and redefining the min-max optimization problem of adversarial training, we propose a generalized adversarial training algorithm called Hider-Focused Adversarial Training (HFAT). HFAT introduces an iterative evolution optimization strategy to simplify the optimization problem and employs an auxiliary model to reveal hiders, effectively combining the optimization directions of standard adversarial training and hider prevention. Furthermore, we introduce an adaptive weighting mechanism that lets the model adjust its focus between adversarial examples and hiders across different training periods. Extensive experiments demonstrate the effectiveness of our method and show that HFAT provides higher robustness and accuracy.
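To make the min-max formulation that the abstract builds on concrete, the following is a minimal sketch of standard PGD-based adversarial training on a linear binary classifier (this illustrates the baseline inner-maximization/outer-minimization loop only, not the authors' HFAT algorithm; the function names and hyperparameters `eps`, `alpha`, `steps` are illustrative assumptions):

```python
import numpy as np

def loss(w, x, y):
    # binary logistic loss for a linear model, label y in {-1, +1}
    z = y * (w @ x)
    return np.log1p(np.exp(-z))

def grad_x(w, x, y):
    # gradient of the logistic loss with respect to the input x
    z = y * (w @ x)
    return -y * w / (1.0 + np.exp(z))

def pgd_attack(w, x, y, eps=0.3, alpha=0.1, steps=10):
    # inner maximization: worst-case perturbation in the L-infinity ball
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_x(w, x_adv, y))
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project back onto the ball
    return x_adv

def adv_train_step(w, x, y, lr=0.1):
    # outer minimization: one gradient step on the adversarial example
    x_adv = pgd_attack(w, x, y)
    z = y * (w @ x_adv)
    grad_w = -y * x_adv / (1.0 + np.exp(z))
    return w - lr * grad_w
```

In the paper's terms, "hiders" are samples that this loop once handled correctly but that become vulnerable again after later updates; HFAT augments the inner step so such samples are surfaced by an auxiliary model rather than only attacking the current worst cases.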
Key words
Adversarial Training, Hidden Threat, Min-Max Optimization, Iterative Optimization, Adversarial Examples, Auxiliary Model, Deep Neural Network, Data Augmentation, Kullback-Leibler Divergence, Adversarial Attacks, Adversarial Robustness, Projected Gradient Descent, Black-box Attacks, Decision Boundary