An Abstraction-Based Framework for Neural Network Verification

Yizhak Yisrael Elboher, Justin Gottschlich, Guy Katz

CAV 2020.

Keywords:
neural network verification, network size, Marabou framework, verification technique, counterexample-guided abstraction refinement

Abstract:

Deep neural networks are increasingly being used as controllers for safety-critical systems. Because neural networks are opaque, certifying their correctness is a significant challenge. To address this issue, several approaches have recently been proposed to formally verify them. However, network size is often a bottleneck for such approaches.

Introduction
  • Machine programming (MP), the automatic generation of software, is showing early signs of fundamentally transforming the way software is developed [10].
  • DNNs are appealing in that they are easier to create than handcrafted software, while still achieving excellent results.
  • Their usage raises a challenge when it comes to certification.
  • In a feedforward neural network, the neurons in the first layer receive input data that sets their initial values.
  • The output layer provides the resulting value of the DNN for a given input (a minimal forward-pass sketch follows this list).
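  To make the data flow concrete, here is a minimal forward-pass sketch in Python (illustrative only, not code from the paper): the hidden layer applies ReLU activations, the output layer is left linear, and all weights, biases, and dimensions are made-up values.

      import numpy as np

      def relu(x):
          return np.maximum(x, 0.0)

      def forward(weights, biases, x):
          # Hidden layers: affine transformation followed by a ReLU activation.
          for W, b in zip(weights[:-1], biases[:-1]):
              x = relu(W @ x + b)
          # Output layer: affine only; its values are the network's result.
          return weights[-1] @ x + biases[-1]

      # Toy network (made-up values): 2 inputs -> 3 hidden ReLU neurons -> 1 output.
      weights = [np.array([[1.0, -2.0], [0.5, 1.0], [-1.0, 1.5]]),
                 np.array([[2.0, -1.0, 0.5]])]
      biases = [np.zeros(3), np.zeros(1)]
      print(forward(weights, biases, np.array([1.0, 0.5])))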
Highlights
  • Machine programming (MP), the automatic generation of software, is showing early signs of fundamentally transforming the way software is developed [10]
  • A key ingredient employed by machine programming is the deep neural network (DNN), which has emerged as an effective means to semi-autonomously implement many complex software systems
  • Deep neural networks are artifacts produced by machine learning: a user provides examples of how a system should behave, and a machine learning algorithm generalizes these examples into a deep neural network capable of correctly handling inputs that it has not seen before
  • Because many practices for improving the reliability of hand-crafted code have yet to be successfully applied to deep neural networks, it remains unclear how to overcome the opacity of deep neural networks, which may limit our ability to certify them before they are deployed
  • Our contributions are: (i) we propose a general framework for over-approximating and refining deep neural networks; (ii) we propose several heuristics for abstraction and refinement, to be used within our general framework; and (iii) we provide an implementation of our technique that integrates with the Marabou verification tool and use it for evaluation (a schematic CEGAR-style loop is sketched after this list)
  • The abstraction-based approach we have proposed here can provide a significant reduction in network size, boosting the performance of existing verification technology
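  The framework follows the classical counterexample-guided abstraction refinement (CEGAR) pattern. Below is a schematic sketch of that loop in Python; the abstract, solve, is_real_cex, and refine callbacks are assumed placeholders supplied by the caller, not the paper's or Marabou's actual API, and solve can be any sound and complete verifier.

      def verify_with_cegar(network, prop, abstract, solve, is_real_cex, refine):
          # `abstract(network)` builds a smaller, over-approximating network;
          # `solve(net, prop)` is a sound and complete verifier returning
          # ("UNSAT", None) or ("SAT", counterexample); `is_real_cex` replays a
          # counterexample on the original network; `refine` restores some of
          # the merged neurons. All four are caller-supplied placeholders.
          abstract_net = abstract(network)
          while True:
              status, cex = solve(abstract_net, prop)
              if status == "UNSAT":
                  # No counterexample for the over-approximation implies none
                  # exists for the original network either.
                  return "UNSAT", None
              if is_real_cex(network, cex, prop):
                  return "SAT", cex  # the violation is real on the original net
              # Spurious counterexample: refine the abstraction and try again.
              abstract_net = refine(abstract_net, network, cex)

  Soundness rests on the abstraction being an over-approximation: every behavior of the original network is also a behavior of the abstract one, so an UNSAT answer for the abstraction carries over to the original network.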
Conclusion
  • With deep neural networks becoming widespread and with their forthcoming integration into safety-critical systems, there is an urgent need for scalable techniques to verify and reason about them.
  • Abstraction-based techniques can mitigate this difficulty by replacing the networks to be verified with smaller versions thereof, without compromising the soundness of the verification procedure (a toy neuron-merging sketch follows this list).
  • The authors will investigate abstraction schemes for networks that use additional activation functions beyond ReLUs. They also plan to make the abstraction scheme parallelizable, allowing users to employ multiple worker nodes to explore different combinations of abstraction and refinement steps, hopefully leading to faster convergence.
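  As a toy illustration of why merging neurons can yield a sound over-approximation, the sketch below (an assumption-laden example, not the paper's exact merge rules) combines two hidden ReLU neurons into one by taking element-wise maxima of their incoming weights and biases and summing their outgoing weights. Under the stated assumptions (non-negative layer inputs, as produced by a preceding ReLU layer, and non-negative outgoing weights), the merged neuron's contribution never falls below the original pair's, so the smaller network over-approximates the output for properties that upper-bound it.

      import numpy as np

      relu = lambda z: np.maximum(z, 0.0)

      def pair_output(w_i, b_i, c_i, w_j, b_j, c_j, x):
          # Contribution of the two original neurons to the next layer.
          return c_i * relu(w_i @ x + b_i) + c_j * relu(w_j @ x + b_j)

      def merged_output(w_i, b_i, c_i, w_j, b_j, c_j, x):
          # Abstract neuron: element-wise max of incoming weights and biases,
          # sum of the outgoing weights.
          w_m, b_m, c_m = np.maximum(w_i, w_j), max(b_i, b_j), c_i + c_j
          return c_m * relu(w_m @ x + b_m)

      rng = np.random.default_rng(0)
      w_i, w_j = rng.normal(size=4), rng.normal(size=4)   # arbitrary incoming weights
      b_i, b_j, c_i, c_j = -0.3, 0.2, 1.5, 0.7            # non-negative outgoing weights
      for _ in range(1000):
          x = rng.uniform(0.0, 1.0, size=4)               # non-negative layer inputs
          assert merged_output(w_i, b_i, c_i, w_j, b_j, c_j, x) >= \
                 pair_output(w_i, b_i, c_i, w_j, b_j, c_j, x) - 1e-9
      print("merged neuron over-approximates the pair on all sampled inputs")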
Related work
  • In recent years, multiple schemes have been proposed for the verification of neural networks. Besides Marabou [16,17], these include Marabou’s predecessor Reluplex [14,19], the Planet solver [6], the BaB solver [2], the Sherlock solver [5], the ReluVal solver [30], and others (e.g., [21,24,29,31]). Other approaches use sound but incomplete strategies, such as input space discretization [12] or abstract interpretation [7]. Our approach can be integrated with any sound and complete solver as its engine; incomplete approaches could also be used and might afford better performance, but could result in non-termination.

    Some existing DNN verification techniques incorporate abstraction elements. In [25], the authors use abstraction to over-approximate the Sigmoid activation function with a collection of rectangles. If the abstract verification query they produce is UNSAT, then so is the original. When a spurious counterexample is found, an arbitrary refinement step is performed. The authors report limited scalability, tackling only networks with a few dozen neurons. Abstraction techniques also appear in the AI2 approach [7], but there it is the input property and reachable regions that are over-approximated, as opposed to the DNN itself.
Funding
  • This project was partially supported by grants from the Binational Science Foundation (2017662) and the Israel Science Foundation (683/18)
References
  • [1] M. Bojarski, D. Del Testa, D. Dworakowski, B. Firner, B. Flepp, P. Goyal, L. Jackel, M. Monfort, U. Muller, J. Zhang, X. Zhang, J. Zhao, and K. Zieba. End to End Learning for Self-Driving Cars, 2016. Technical Report. http://arxiv.org/abs/1604.07316.
  • [2] R. Bunel, I. Turkaslan, P. Torr, P. Kohli, and M. Kumar. Piecewise Linear Neural Network Verification: A Comparative Study, 2017. Technical Report. https://arxiv.org/abs/1711.00455v1.
  • [3] N. Carlini, G. Katz, C. Barrett, and D. Dill. Provably Minimally-Distorted Adversarial Examples, 2017. Technical Report. https://arxiv.org/abs/1709.10207.
  • [4] E. Clarke, O. Grumberg, S. Jha, Y. Lu, and H. Veith. Counterexample-Guided Abstraction Refinement. In Proc. 12th Int. Conf. on Computer Aided Verification (CAV), pages 154–169, 2000.
  • [5] S. Dutta, S. Jha, S. Sankaranarayanan, and A. Tiwari. Output Range Analysis for Deep Neural Networks. In Proc. 10th NASA Formal Methods Symposium (NFM), pages 121–138, 2018.
  • [6] R. Ehlers. Formal Verification of Piece-Wise Linear Feed-Forward Neural Networks. In Proc. 15th Int. Symp. on Automated Technology for Verification and Analysis (ATVA), pages 269–286, 2017.
  • [7] T. Gehr, M. Mirman, D. Drachsler-Cohen, P. Tsankov, S. Chaudhuri, and M. Vechev. AI2: Safety and Robustness Certification of Neural Networks with Abstract Interpretation. In Proc. 39th IEEE Symposium on Security and Privacy (S&P), 2018.
  • [8] I. Goodfellow, Y. Bengio, and A. Courville. Deep Learning. MIT Press, 2016.
  • [9] D. Gopinath, G. Katz, C. Pasareanu, and C. Barrett. DeepSafe: A Data-Driven Approach for Checking Adversarial Robustness in Neural Networks. In Proc. 16th Int. Symp. on Automated Technology for Verification and Analysis (ATVA), pages 3–19, 2018.
  • [10] J. Gottschlich, A. Solar-Lezama, N. Tatbul, M. Carbin, M. Rinard, R. Barzilay, S. Amarasinghe, J. Tenenbaum, and T. Mattson. The Three Pillars of Machine Programming. In Proc. 2nd ACM SIGPLAN Int. Workshop on Machine Learning and Programming Languages (MAPL), pages 69–80, 2018.
  • [11] G. Hinton, L. Deng, D. Yu, G. Dahl, A. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. Sainath, and B. Kingsbury. Deep Neural Networks for Acoustic Modeling in Speech Recognition: The Shared Views of Four Research Groups. IEEE Signal Processing Magazine, 29(6):82–97, 2012.
  • [12] X. Huang, M. Kwiatkowska, S. Wang, and M. Wu. Safety Verification of Deep Neural Networks. In Proc. 29th Int. Conf. on Computer Aided Verification (CAV), pages 3–29, 2017.
  • [13] K. Julian, J. Lopez, J. Brush, M. Owen, and M. Kochenderfer. Policy Compression for Aircraft Collision Avoidance Systems. In Proc. 35th Digital Avionics Systems Conf. (DASC), pages 1–10, 2016.
  • [14] G. Katz, C. Barrett, D. Dill, K. Julian, and M. Kochenderfer. Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks. In Proc. 29th Int. Conf. on Computer Aided Verification (CAV), pages 97–117, 2017.
  • [15] G. Katz, C. Barrett, D. Dill, K. Julian, and M. Kochenderfer. Towards Proving the Adversarial Robustness of Deep Neural Networks. In Proc. 1st Workshop on Formal Verification of Autonomous Vehicles (FVAV), pages 19–26, 2017.
  • [16] G. Katz, D. Huang, D. Ibeling, K. Julian, C. Lazarus, R. Lim, P. Shah, S. Thakoor, H. Wu, A. Zeljic, D. Dill, M. Kochenderfer, and C. Barrett. The Marabou Framework for Verification and Analysis of Deep Neural Networks. In Proc. 31st Int. Conf. on Computer Aided Verification (CAV), 2019.
  • [17] Y. Kazak, C. Barrett, G. Katz, and M. Schapira. Verifying Deep-RL-Driven Systems. In Proc. 1st ACM SIGCOMM Workshop on Network Meets AI & ML (NetAI), 2019.
  • [18] A. Krizhevsky, I. Sutskever, and G. Hinton. ImageNet Classification with Deep Convolutional Neural Networks. In Advances in Neural Information Processing Systems (NIPS), pages 1097–1105, 2012.
  • [19] L. Kuper, G. Katz, J. Gottschlich, K. Julian, C. Barrett, and M. Kochenderfer. Toward Scalable Verification for Safety-Critical Deep Networks, 2018. Technical Report. https://arxiv.org/abs/1801.05950.
  • [20] A. Kurakin, I. Goodfellow, and S. Bengio. Adversarial Examples in the Physical World, 2016. Technical Report. http://arxiv.org/abs/1607.02533.
  • [21] A. Lomuscio and L. Maganti. An Approach to Reachability Analysis for Feed-Forward ReLU Neural Networks, 2017. Technical Report. https://arxiv.org/abs/1706.07351.
  • [22] H. Mao, R. Netravali, and M. Alizadeh. Neural Adaptive Video Streaming with Pensieve. In Proc. Conf. of the ACM Special Interest Group on Data Communication (SIGCOMM), pages 197–210, 2017.
  • [23] V. Nair and G. Hinton. Rectified Linear Units Improve Restricted Boltzmann Machines. In Proc. 27th Int. Conf. on Machine Learning (ICML), pages 807–814, 2010.
  • [24] N. Narodytska, S. Kasiviswanathan, L. Ryzhyk, M. Sagiv, and T. Walsh. Verifying Properties of Binarized Deep Neural Networks, 2017. Technical Report. http://arxiv.org/abs/1709.06662.
  • [25] L. Pulina and A. Tacchella. An Abstraction-Refinement Approach to Verification of Artificial Neural Networks. In Proc. 22nd Int. Conf. on Computer Aided Verification (CAV), pages 243–257, 2010.
  • [26] W. Ruan, X. Huang, and M. Kwiatkowska. Reachability Analysis of Deep Neural Networks with Provable Guarantees. In Proc. 27th Int. Joint Conf. on Artificial Intelligence (IJCAI), pages 2651–2659, 2018.
  • [27] D. Silver, A. Huang, C. Maddison, A. Guez, L. Sifre, G. Van Den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, and S. Dieleman. Mastering the Game of Go with Deep Neural Networks and Tree Search. Nature, 529(7587):484–489, 2016.
  • [28] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus. Intriguing Properties of Neural Networks, 2013. Technical Report. http://arxiv.org/abs/1312.6199.
  • [29] V. Tjeng, K. Xiao, and R. Tedrake. Evaluating Robustness of Neural Networks with Mixed Integer Programming. In Proc. 7th Int. Conf. on Learning Representations (ICLR), 2019.
  • [30] S. Wang, K. Pei, J. Whitehouse, J. Yang, and S. Jana. Formal Security Analysis of Neural Networks using Symbolic Intervals. In Proc. 27th USENIX Security Symposium, 2018.
  • [31] W. Xiang, H.-D. Tran, and T. Johnson. Output Reachable Set Estimation and Verification for Multilayer Neural Networks. IEEE Transactions on Neural Networks and Learning Systems (TNNLS), 99:1–7, 2018.