Reducing Adversarially Robust Learning to Non-Robust PAC Learning

Omar Montasser

NeurIPS 2020.

Abstract:

We study the problem of reducing adversarially robust learning to standard PAC learning, i.e., the complexity of learning adversarially robust predictors using access to only a black-box non-robust learner. We give a reduction that can robustly learn any hypothesis class $\mathcal{C}$ using any non-robust learner $\mathcal{A}$ for $\mathcal{C}$. […]

Introduction
  • The authors consider the problem of learning predictors that are robust to adversarial examples at test time.
  • A central question is whether this is possible for a general hypothesis class C (e.g., neural networks): that is, if there exists a predictor in C with zero robust risk w.r.t. some unknown distribution D over X × Y, can one find a predictor with small robust risk using m i.i.d. samples S = {(x_i, y_i)}_{i=1}^m from D? Recently, Montasser et al. [2019] showed that if C is PAC-learnable non-robustly, then C is also adversarially robustly learnable (the robust risk notion is recalled after this list).
  • Their result is not constructive and the robust learning algorithm given is inefficient, complex, and does not directly use a non-robust learner.
  • Many systems in practice perform standard learning but with no robustness guarantees, and it would be beneficial to provide wrapper procedures that can guarantee adversarial robustness in a black-box manner without needing to modify current learning systems internally
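    For reference, the robust risk alluded to above is the standard notion, sketched here from the usual definition rather than quoted from the paper: for an adversary U that maps each input x to its set of allowed perturbations U(x) ⊆ X,

    $$R_{\mathcal{U}}(h; \mathcal{D}) \;=\; \Pr_{(x,y) \sim \mathcal{D}}\left[\, \exists z \in \mathcal{U}(x) : h(z) \neq y \,\right],$$

    and the realizable setting assumes there is some c ∈ C with $R_{\mathcal{U}}(c; \mathcal{D}) = 0$.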
Highlights
  • We consider the problem of learning predictors that are robust to adversarial examples at test time
  • Main Results: When studying reductions of adversarially robust learning to non-robust learning, an important aspect emerges regarding the form of access that the reduction algorithm has to the adversary U
  • How should we model access to the sets of adversarial perturbations represented by U? We explore the setting where the reduction algorithm has explicit knowledge of the adversary U
  • Agnostic Setting: We focused only on robust PAC learning in the realizable setting, where we assume there is a c ∈ C with zero robust error
  • We remark that an agnostic-to-realizable reduction described in Montasser et al. [2019, Theorem 6] can be used in our setting, but it has runtime that is exponential in vc(A). Another attempt, through agnostic boosting frameworks [e.g. Kalai and Kanade, 2009], requires a non-robust PAC learner A with error ε that scales with |U|², which results in a sample complexity that depends on |U|, and this is something we would like to avoid
Results
  • When studying reductions of adversarially robust learning to non-robust learning, an important aspect emerges regarding the form of access that the reduction algorithm has to the adversary U.
  • The authors first show that there is an algorithm that can learn adversarially robust predictors with black-box oracle access to a non-robust algorithm: Theorem 3.1 (Informal).
  • For any adversary U, Algorithm 1 robustly learns any target class C using any black-box non-robust PAC learner A for C, with O(log² |U|) oracle calls to A and sample complexity independent of |U| (a simplified sketch of the reduction appears after this list).
  • There exists an adversary U such that for any reduction algorithm B, there exists a target class C and a PAC learner A for C such that Ω(log |U|) oracle queries to A are necessary to robustly learn C
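    To make the reduction concrete, here is a much-simplified Python sketch of the "inflate the data with perturbations, then boost using the non-robust learner" idea behind Theorem 3.1. It is illustrative only and is not the paper's Algorithm 1 (which is more careful and achieves O(log² |U|) oracle calls with sample complexity independent of |U|); the names perturb (returning the finite set U(x)) and nonrobust_learner (the black-box PAC learner A) are hypothetical placeholders.

    import math
    import random

    def robust_loss(h, x, y, perturb):
        """1 if some perturbation z in U(x) is misclassified by h, else 0.
        Useful for evaluating the predictor returned below."""
        return int(any(h(z) != y for z in perturb(x)))

    def reduce_to_nonrobust(sample, perturb, nonrobust_learner, rounds=None):
        """Learn a robust predictor using only black-box calls to a non-robust learner.

        sample: list of (x, y) pairs with y in {0, 1}.
        perturb: x -> finite iterable of allowed perturbations U(x).
        nonrobust_learner: list of (z, y) pairs -> predictor h with h(z) in {0, 1}.
        """
        # 1) Inflate the training set: every perturbation of x inherits x's label.
        inflated = [(z, y) for (x, y) in sample for z in perturb(x)]
        m = len(inflated)
        rounds = rounds or int(math.ceil(8 * math.log(max(m, 2))))  # heuristic choice

        weights = [1.0 / m] * m
        hypotheses = []

        # 2) Boosting loop: each round calls the non-robust learner on a dataset
        #    resampled according to the current weights over the inflated set.
        for _ in range(rounds):
            idx = random.choices(range(m), weights=weights, k=m)
            h = nonrobust_learner([inflated[i] for i in idx])
            hypotheses.append(h)

            # Reweight so that points the new hypothesis misclassifies gain
            # relative weight (AdaBoost-style multiplicative update).
            errs = [int(h(z) != y) for (z, y) in inflated]
            err = sum(w * e for w, e in zip(weights, errs)) or 1e-12
            beta = err / max(1.0 - err, 1e-12)
            weights = [w * (1.0 if e else beta) for w, e in zip(weights, errs)]
            total = sum(weights)
            weights = [w / total for w in weights]

        # 3) Output the majority vote over the collected hypotheses.
        def majority_vote(z):
            votes = sum(h(z) for h in hypotheses)
            return int(2 * votes >= len(hypotheses))

        return majority_vote

    The point of the sketch is that the black-box learner A is only ever asked to fit ordinary labeled datasets; robustness comes from inflating the data with the adversary's perturbations and aggregating several returned hypotheses.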
Conclusion
  • The main contribution of this paper is in formulating the question of reducing adversarially robust learning to standard non-robust learning and providing answers in some settings.
  • The authors remark that an agnostic-to-realizable reduction described in Montasser et al. [2019, Theorem 6] can be used in this setting, but it has runtime that is exponential in vc(A) (the agnostic objective in question is sketched after this list).
  • Another attempt, through agnostic boosting frameworks [e.g. Kalai and Kanade, 2009], requires a non-robust PAC learner A with error ε that scales with |U|², which results in a sample complexity that depends on |U|, and this is something the authors would like to avoid.
  • What the authors consider in this paper can be viewed as a question of boosting robustness: Can non-robust predictors be boosted to attain a robust predictor? And can this be done efficiently? Another natural question, which the authors did not study in this paper, is: Can weakly robust predictors be boosted to attain a robust predictor?
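    For concreteness, the agnostic robust learning goal referred to above can be stated as follows (a standard formulation, sketched here rather than quoted from the paper): given m i.i.d. samples from D, output a predictor $\hat{h}$ such that, with probability at least $1 - \delta$,

    $$R_{\mathcal{U}}(\hat{h}; \mathcal{D}) \;\le\; \min_{c \in \mathcal{C}} R_{\mathcal{U}}(c; \mathcal{D}) + \epsilon.$$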
Summary
  • Introduction:

    The authors consider the problem of learning predictors that are robust to adversarial examples at test time.
  • A central question is whether this is possible for a general hypothesis class C (e.g., neural networks): that is, if there exists a predictor in C with zero robust risk w.r.t. some unknown distribution D over X × Y, can one find a predictor with small robust risk using m i.i.d. samples S = {(x_i, y_i)}_{i=1}^m from D? Recently, Montasser et al. [2019] showed that if C is PAC-learnable non-robustly, then C is also adversarially robustly learnable.
  • Their result is not constructive and the robust learning algorithm given is inefficient, complex, and does not directly use a non-robust learner.
  • Many systems in practice perform standard learning but with no robustness guarantees, and it would be beneficial to provide wrapper procedures that can guarantee adversarial robustness in a black-box manner without needing to modify current learning systems internally
  • Objectives:

    The authors aim to understand when such efficient reductions are possible.
  • Results:

    When studying reductions of adversarially robust learning to non-robust learning, an important aspect emerges regarding the form of access that the reduction algorithm has to the adversary U.
  • The authors first show that there is an algorithm that can learn adversarially robust predictors with black-box oracle access to a non-robust algorithm: Theorem 3.1 (Informal).
  • For any adversary U, Algorithm 1 robustly learns any target class C using any black-box non-robust PAC learner A for C, with O(log² |U|) oracle calls to A and sample complexity independent of |U|.
  • There exists an adversary U such that for any reduction algorithm B, there exists a target class C and a PAC learner A for C such that Ω(log |U|) oracle queries to A are necessary to robustly learn C
  • Conclusion:

    The main contribution of this paper is in formulating the question of reducing adversarially robust learning to standard non-robust learning and providing answers in some settings.
  • The authors remark that an agnostic-to-realizable reduction described in Montasser et al. [2019, Theorem 6] can be used in this setting, but it has runtime that is exponential in vc(A).
  • Another attempt, through agnostic boosting frameworks [e.g. Kalai and Kanade, 2009], requires a non-robust PAC learner A with error ε that scales with |U|², which results in a sample complexity that depends on |U|, and this is something the authors would like to avoid.
  • What the authors consider in this paper can be viewed as a question of boosting robustness: Can non-robust predictors be boosted to attain a robust predictor? And can this be done efficiently? Another natural question, which the authors did not study in this paper, is: Can weakly robust predictors be boosted to attain a robust predictor?
Related work
  • Recent work [Mansour et al., 2015, Feige et al., 2015, 2018, Attias et al., 2019] can be interpreted as giving reduction algorithms for adversarially robust learning. Specifically, Feige et al. [2015] gave a reduction algorithm that can robustly learn a finite hypothesis class C using black-box access to an ERM for C. Later, Attias et al. [2019] improved this to handle infinite hypothesis classes C. But their sample complexity and the number of calls to ERM depend super-linearly on the number of possible perturbations |U| = sup_x |U(x)|, which is undesirable for most types of perturbations; we completely avoid a sample complexity dependence on |U|, and reduce the oracle complexity to at most a poly-logarithmic dependence. Furthermore, their work assumes access specifically to an ERM procedure, which is a very specific type of learner, while we only require access to any method that PAC-learns C and whose image has bounded VC-dimension.

    A related goal was explored by Salman et al. [2020]: they proposed a method to robustify pretrained predictors. Their method takes as input a black-box predictor (not a learning algorithm) and a point x, and outputs a label prediction y for x together with a radius r such that the label y is robust to ℓ2 perturbations of radius r. But this does not guarantee that the predictions y are correct, nor that the radius r would be what we desire, and even if the predictor was returned by a learning algorithm and has very small non-robust error, we do not end up with any guarantee on the robust risk of the robustified predictor. In this paper, we require black-box access to a learning algorithm (not just to a single predictor), but we output a predictor that is guaranteed to have small robust risk (if one exists in the class; see Definition 2.2). We also provide a general treatment for arbitrary adversaries U, not just ℓp perturbations. Finally, we note that the approach of Montasser et al. [2019] can be interpreted as using black-box access to an oracle RERM_C that minimizes the robust empirical risk over C.
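    In its standard form (following the notation above and the definition in Montasser et al. [2019]), the robust empirical risk minimization oracle selects

    $$\mathrm{RERM}_{\mathcal{C}}(S) \;\in\; \operatorname*{argmin}_{h \in \mathcal{C}} \ \sum_{(x,y) \in S} \ \sup_{z \in \mathcal{U}(x)} \mathbb{1}\left[ h(z) \neq y \right].$$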
Funding
  • This work is partially supported by DARPA cooperative agreement HR00112020003
Reference
  • P. Assouad. Densité et dimension. Annales de l'Institut Fourier (Grenoble), 33(3):233–282, 1983.
  • Idan Attias, Aryeh Kontorovich, and Yishay Mansour. Improved generalization bounds for robust learning. In Aurélien Garivier and Satyen Kale, editors, Algorithmic Learning Theory, ALT 2019, 22-24 March 2019, Chicago, Illinois, USA, volume 98 of Proceedings of Machine Learning Research, pages 162–183. PMLR, 2019. URL http://proceedings.mlr.press/v98/attias19a.html.
  • Maria-Florina Balcan. Lecture notes - machine learning theory, January 2010.
  • Battista Biggio, Igino Corona, Davide Maiorca, Blaine Nelson, Nedim Šrndic, Pavel Laskov, Giorgio Giacinto, and Fabio Roli. Evasion attacks against machine learning at test time. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 387–402. Springer, 2013.
  • A. Blumer, A. Ehrenfeucht, D. Haussler, and M. Warmuth. Learnability and the Vapnik-Chervonenkis dimension. Journal of the Association for Computing Machinery, 36(4):929–965, 1989.
  • Daniel G. Brown. How I wasted too long finding a concentration inequality for sums of geometric variables.
  • Sebastien Bubeck, Yin Tat Lee, Eric Price, and Ilya Razenshteyn. Adversarial examples from computational constraints. In International Conference on Machine Learning, pages 831–840, 2019.
  • Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. In Yoshua Bengio and Yann LeCun, editors, 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, 2015. URL http://arxiv.org/abs/1412.6572.
  • Adam Kalai and Varun Kanade. Potential-based agnostic boosting. In Yoshua Bengio, Dale Schuurmans, John D. Lafferty, Christopher K. I. Williams, and Aron Culotta, editors, Advances in Neural Information Processing Systems 22: 23rd Annual Conference on Neural Information Processing Systems 2009. Proceedings of a meeting held 7-10 December 2009, Vancouver, British Columbia, Canada, pages 880–888. Curran Associates, Inc., 2009. URL http://papers.nips.cc/paper/3676-potential-based-agnostic-boosting.
  • Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net, 2018. URL https://openreview.net/forum?id=rJzIBfZAb.
  • Yishay Mansour, Aviad Rubinstein, and Moshe Tennenholtz. Robust probabilistic inference. In Piotr Indyk, editor, Proceedings of the Twenty-Sixth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2015, San Diego, CA, USA, January 4-6, 2015, pages 449–460. SIAM, 2015. doi: 10.1137/1.9781611973730.31. URL https://doi.org/10.1137/1.9781611973730.31.
  • Omar Montasser, Steve Hanneke, and Nathan Srebro. VC classes are adversarially robustly learnable, but only improperly. In Alina Beygelzimer and Daniel Hsu, editors, Proceedings of the Thirty-Second Conference on Learning Theory, volume 99 of Proceedings of Machine Learning Research, pages 2512–2530, Phoenix, USA, 25–28 Jun 2019. PMLR.
  • Omar Montasser, Surbhi Goel, Ilias Diakonikolas, and Nati Srebro. Efficiently learning adversarially robust halfspaces with noise. In Proceedings of Machine Learning and Systems 2020, pages 10630–10641. 2020.
  • S. Moran and A. Yehudayoff. Sample compression schemes for VC classes. Journal of the ACM, 63(3):21:1–21:10, 2016.
  • Hadi Salman, Mingjie Sun, Greg Yang, Ashish Kapoor, and J Zico Kolter. Black-box smoothing: A provable defense for pretrained classifiers. arXiv preprint arXiv:2003.01908, 2020.
  • R. E. Schapire and Y. Freund. Boosting. Adaptive Computation and Machine Learning. MIT Press, Cambridge, MA, 2012.
  • Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013.
  • V. Vapnik and A. Chervonenkis. On the uniform convergence of relative frequencies of events to their probabilities. Theory of Probability and its Applications, 16(2):264–280, 1971.
  • V. Vapnik and A. Chervonenkis. Theory of Pattern Recognition. Nauka, Moscow, 1974.