
# Higher-Order Certification for Randomized Smoothing

NeurIPS 2020


Randomized smoothing is a recently proposed defense against adversarial attacks that has achieved state-of-the-art provable robustness against $\ell_2$ perturbations. A number of publications have extended the guarantees to other metrics, such as $\ell_1$ or $\ell_\infty$, by using different smoothing measures. Although the current framework has be…


• Motivated by the two limitations above, we focus on improving the certified safety region in a way that is agnostic to the threat model.
• We present, for each threat model, the upper envelope of certified accuracies attained over the range of considered σ ∈ {0.12, 0.25, 0.50, 1.00}.
• We note that the first-order smoothing technique given in this paper is only a proof-of-concept to show it is possible to better leverage local information to certify larger safety regions without changing the smoothing measure



## Introduction

• Deep neural networks (DNNs) can be highly sensitive: small, imperceptible input perturbations can lead to misclassification [1, 2].
• For Gaussian-smoothed classifiers, [5] made the certified bounds worst-case-optimal for certified $\ell_2$-norm radii by using the Neyman-Pearson lemma, while the authors of [13] combined the certification method with adversarial training to further improve the empirical results.
• [8] proposed a general method for finding the optimal smoothing distribution given any threat model, as well as a framework for calculating the certified robustness of the smoothed classifier.
## Results

• A number of works have shown that for $\ell_p$-norm threat models with large $p$, it is impossible to certify a large radius of $O(d^{1/p - 1/2})$ (where $d$ is the input dimension) while retaining high standard accuracy.
• The authors reuse the models from Cohen et al. [5] and calculate the certified accuracy at radius R by counting the test-set samples that are correctly classified by the smoothed classifier g with a certified radius of at least R.
• For both the proposed certificate and the baseline certificate [5], the authors use a failure probability of α = 0.001 and N = 200,000 samples for CIFAR-10 and N = 1,250,000 samples for ImageNet.
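For context, the baseline $\ell_2$ certificate of Cohen et al. [5] that these experiments compare against has a simple closed form. A minimal sketch (the function name is mine, and `p_a_lower` denotes the lower confidence bound on the top-class probability):

```python
from statistics import NormalDist

def certified_l2_radius(p_a_lower: float, sigma: float) -> float:
    """Baseline certificate of Cohen et al. [5]: a Gaussian-smoothed
    classifier whose top-class probability is at least p_a_lower is
    provably constant within l2 radius sigma * Phi^{-1}(p_a_lower)."""
    if p_a_lower <= 0.5:
        return 0.0  # the smoothed classifier abstains; nothing is certified
    return sigma * NormalDist().inv_cdf(p_a_lower)
```

Larger noise levels σ trade clean accuracy for larger certifiable radii, which is why the experiments sweep σ ∈ {0.12, 0.25, 0.50, 1.00}.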
## Conclusion

• Under the proposed general framework for calculating certified radii, it is easy to see that adding more local constraints (Hix) in Equation (2) gives a bigger value of px(z) for any x, z, which makes the super-level set of px, and equivalently the certified safety region, bigger. In the following subsection, the authors study the properties of functions in GFμ to get some motivation about which information might help them achieve a larger certified safety region.
• The authors have shown that even in the black-box setting, leveraging more information about the distribution of the labels among the sampled points allows them to certify larger regions and guarantee large certified radii against multiple threat models simultaneously.
• The authors have shown this to hold theoretically and demonstrated it on CIFAR and Imagenet classifiers.
• This work could be extended to derive and use, for any given threat model, the best local information to exploit in order to improve the certificates for that threat model.

• Table 1: Estimators used to calculate the different norm values of ∇g(x). (* newly designed estimators)
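The specific estimators in Table 1 are the paper's own. As a rough illustration of the kind of quantity involved, the standard Monte Carlo estimator of ∇g(x) for a Gaussian-smoothed function g (via Stein's identity) can be sketched as follows; all names here are mine, not the paper's:

```python
import numpy as np

def estimate_smoothed_gradient(f, x, sigma=0.25, n_samples=20000, seed=0):
    """Monte Carlo estimate of grad g(x) for the smoothed function
    g(x) = E[f(x + eps)] with eps ~ N(0, sigma^2 I), using Stein's
    identity: grad g(x) = E[f(x + eps) * eps] / sigma^2."""
    rng = np.random.default_rng(seed)
    eps = rng.normal(0.0, sigma, size=(n_samples, x.shape[0]))
    vals = np.apply_along_axis(f, 1, x + eps)  # f maps R^d -> [0, 1]
    return (vals[:, None] * eps).mean(axis=0) / sigma**2
```

Estimators of this kind are noisy, which is consistent with the authors' remark that the ℓ∞ certificate requires far more samples.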

We reuse the models given by Cohen et al. [5] and calculate the certified accuracy at radius R by counting the samples of the test set that are correctly classified by the smoothed classifier g with certified radii of at least R. For both our proposed certificate and the baseline certificate [5], we use a failure probability of α = 0.001 and N = 200,000 samples for CIFAR-10 and N = 1,250,000 samples for ImageNet. For the $\ell_\infty$ radius we require many more samples to get better results, as our current estimator is too noisy.
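The failure probability α enters through a one-sided confidence bound on the top-class probability estimated from the N Monte Carlo samples. A naive stdlib sketch of the Clopper-Pearson lower bound used in [5] follows; it bisects on the binomial tail instead of inverting a beta quantile, so it is only practical for modest N, and the function names are mine:

```python
import math

def binom_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p), summed in log-space for stability."""
    total = 0.0
    for i in range(k, n + 1):
        log_term = (math.lgamma(n + 1) - math.lgamma(i + 1)
                    - math.lgamma(n - i + 1)
                    + i * math.log(p) + (n - i) * math.log(1.0 - p))
        total += math.exp(log_term)
    return min(total, 1.0)

def lower_confidence_bound(n_correct, n_samples, alpha=0.001):
    """One-sided (1 - alpha) Clopper-Pearson lower bound on the true
    success probability, given n_correct hits out of n_samples."""
    if n_correct == 0:
        return 0.0
    lo, hi = 0.0, 1.0
    for _ in range(60):  # bisect on p: P(X >= n_correct | p) grows with p
        mid = (lo + hi) / 2.0
        if binom_sf(n_correct, n_samples, mid) > alpha:
            hi = mid
        else:
            lo = mid
    return lo
```

Certifying with this lower bound, rather than the raw empirical frequency, is what makes the Monte Carlo certificate sound with probability 1 − α.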

• C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus, “Intriguing properties of neural networks,” ICLR, 2014.
• I. Goodfellow, J. Shlens, and C. Szegedy, “Explaining and harnessing adversarial examples,” in ICLR, 2015.
• M. Lecuyer, V. Atlidakis, R. Geambasu, D. Hsu, and S. Jana, “Certified robustness to adversarial examples with differential privacy,” in 2019 IEEE Symposium on Security and Privacy (SP), pp. 656–672, 2019.
• Y. Li, X. Bian, and S. Lyu, “Attacking object detectors via imperceptible patches on background,” arXiv preprint arXiv:1809.05966, 2018.
• J. Cohen, E. Rosenfeld, and Z. Kolter, “Certified adversarial robustness via randomized smoothing,” in Proceedings of the 36th International Conference on Machine Learning (K. Chaudhuri and R. Salakhutdinov, eds.), vol. 97 of Proceedings of Machine Learning Research, (Long Beach, California, USA), pp. 1310–1320, PMLR, 09–15 Jun 2019.
• B. Li, C. Chen, W. Wang, and L. Carin, “Certified adversarial robustness with additive noise,” in NeurIPS, 2019.
• K. D. Dvijotham, J. Hayes, B. Balle, Z. Kolter, C. Qin, A. György, K. Xiao, S. Gowal, and P. Kohli, “A framework for robustness certification of smoothed classifiers using f-divergences.,” in ICLR, 2020.
• G. Yang, T. Duan, E. Hu, H. Salman, I. Razenshteyn, and J. Li, “Randomized smoothing of all shapes and sizes,” arXiv preprint arXiv:2002.08118, 2020.
• A. Blum, T. Dick, N. Manoj, and H. Zhang, “Random smoothing might be unable to certify l∞ robustness for high-dimensional images,” arXiv preprint arXiv:2002.03517, 2020.
• A. Kumar, A. Levine, T. Goldstein, and S. Feizi, “Curse of dimensionality on randomized smoothing for certifiable robustness,” arXiv preprint arXiv:2002.03239, 2020.
• X. Liu, M. Cheng, H. Zhang, and C.-J. Hsieh, “Towards robust neural networks via random self-ensemble,” in Proceedings of the European Conference on Computer Vision (ECCV), pp. 369–385, 2018.
• C. Xie, J. Wang, Z. Zhang, Z. Ren, and A. Yuille, “Mitigating adversarial effects through randomization,” arXiv preprint arXiv:1711.01991, 2017.
• H. Salman, J. Li, I. Razenshteyn, P. Zhang, H. Zhang, S. Bubeck, and G. Yang, “Provably robust deep learning via adversarially trained smoothed classifiers,” in Advances in Neural Information Processing Systems, pp. 11289–11300, 2019.
• G.-H. Lee, Y. Yuan, S. Chang, and T. S. Jaakkola, “Tight certificates of adversarial robustness for randomly smoothed classifiers,” in Advances in Neural Information Processing Systems, 2019.
• J. Teng, G.-H. Lee, and Y. Yuan, “$\ell_1$ adversarial robustness certificates: a randomized smoothing approach,” 2019.
• D. Zhang*, M. Ye*, C. Gong*, Z. Zhu, and Q. Liu, “Filling the soap bubbles: Efficient black-box adversarial certification with non-gaussian smoothing,” 2020.
• H. Chernoff and H. Scheffe, “A generalization of the neyman-pearson fundamental lemma,” The Annals of Mathematical Statistics, pp. 213–225, 1952.
• S. Boyd, S. P. Boyd, and L. Vandenberghe, Convex optimization. Cambridge university press, 2004.
• J. Wendel, “Note on the gamma function,” The American Mathematical Monthly, vol. 55, no. 9, pp. 563–564, 1948.
• In order to solve the optimization problems in the framework, the authors use the Generalized Neyman-Pearson Lemma [17], which is stated with a simplified short proof.
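To make the lemma's role concrete: for Gaussian smoothing, the worst-case function satisfying a single probability constraint E[f] = p under N(x, σ²I) is a half-space (a likelihood-ratio threshold set), so the worst-case value under the shifted Gaussian has the closed form Φ(Φ⁻¹(p) − ‖δ‖/σ). A hedged numerical check of this standard fact (names are mine):

```python
from statistics import NormalDist
import random

Phi = NormalDist().cdf
Phi_inv = NormalDist().inv_cdf

def np_worst_case(p: float, shift: float, sigma: float) -> float:
    """Closed-form minimum of E_{N(x + delta, sigma^2 I)}[f] over all
    f in [0, 1] with E_{N(x, sigma^2 I)}[f] = p (Neyman-Pearson)."""
    return Phi(Phi_inv(p) - shift / sigma)

def half_space_estimate(p: float, shift: float, sigma: float,
                        n=200_000, seed=1) -> float:
    """Monte Carlo value attained by the optimal half-space
    f = 1{z <= sigma * Phi^{-1}(p)}, projected onto the shift
    direction and sampled under the shifted Gaussian."""
    rng = random.Random(seed)
    t = sigma * Phi_inv(p)
    hits = sum(rng.gauss(shift, sigma) <= t for _ in range(n))
    return hits / n
```

Adding further local constraints, as the paper's generalized lemma allows, can only shrink the feasible set and therefore raise this worst-case value, which is the mechanism behind the larger certified regions.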
• The set S, being a level set of a convex function, is convex; so f is the indicator function of a convex set and thus a log-concave function. Moreover, μ, being an isotropic Gaussian distribution, is also log-concave. From the properties of log-concave functions, the convolution f ∗ μ is also log-concave, and as log-concave functions are also quasi-concave, f ∗ μ is quasi-concave.
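The quasi-concavity claim can be sanity-checked numerically in one dimension, where f ∗ μ has a closed form when f is an interval indicator. A hedged sketch (the interval endpoints and σ are arbitrary choices of mine):

```python
from statistics import NormalDist

Phi = NormalDist().cdf

def smoothed_indicator(x, a=-1.0, b=2.0, sigma=0.5):
    """(f * mu)(x) for f = 1_[a, b] and mu = N(0, sigma^2):
    closed form Phi((b - x)/sigma) - Phi((a - x)/sigma)."""
    return Phi((b - x) / sigma) - Phi((a - x) / sigma)

def is_unimodal(values, tol=1e-12):
    """In 1-d, quasi-concavity means the values rise to a peak, then fall."""
    rising = True
    for v1, v2 in zip(values, values[1:]):
        if v2 - v1 < -tol:
            rising = False
        elif v2 - v1 > tol and not rising:
            return False
    return True

values = [smoothed_indicator(i / 100.0) for i in range(-500, 501)]
```

The grid of `values` rises to a single near-flat peak over the interval and falls off on both sides, as the log-concavity argument predicts.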

Jeet Mohapatra
