Debona: Decoupled Boundary Network Analysis for Tighter Bounds and Faster Adversarial Robustness Proofs

CoRR (2020)

Abstract
Neural networks are commonly used in safety-critical real-world applications. Unfortunately, the predicted output is often highly sensitive to small, and possibly imperceptible, changes to the input data. Proving that no such adversarial example exists, or else providing a concrete instance, is therefore crucial for safe deployment. As enumerating and testing all potential adversarial examples is computationally infeasible, verification techniques have been developed that provide mathematically sound proofs of their absence using overestimations of the network activations. We propose an improved technique for computing tight upper and lower bounds on these node values, which gains flexibility by computing the two bounds independently of each other. Furthermore, we obtain an additional speedup by re-implementing parts of the original state-of-the-art tool "Neurify". Combined, these adaptations reduce the necessary runtime by up to 78% and allow the search to succeed on networks and inputs that were previously too complex. Finally, we provide proofs for tight upper and lower bounds on max-pooling layers in convolutional networks. To ensure widespread usability, we open-source our implementation "Debona", which features both the implementation-specific enhancements and the refined bound computation, for faster and more exact results.
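To make the decoupling idea concrete, the following is a minimal sketch in plain interval arithmetic, rather than the symbolic linear relaxations Debona itself builds on: lower and upper bounds travel as separate vectors that are tightened independently, and max-pooling windows receive sound bounds as described in the abstract. The function names interval_bounds and maxpool_bounds are illustrative, not part of the released tool.

import numpy as np

def interval_bounds(weights, biases, x_lo, x_hi):
    """Propagate elementwise interval bounds through a fully connected
    ReLU network, carrying lower and upper bounds as separate vectors."""
    lo, hi = np.asarray(x_lo, float), np.asarray(x_hi, float)
    for i, (W, b) in enumerate(zip(weights, biases)):
        W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
        # An output's lower bound pairs positive weights with input lower
        # bounds and negative weights with input upper bounds; vice versa
        # for the upper bound. The two computations do not interact.
        new_lo = W_pos @ lo + W_neg @ hi + b
        new_hi = W_pos @ hi + W_neg @ lo + b
        if i < len(weights) - 1:
            # ReLU is monotone, so it maps each bound independently and soundly.
            lo, hi = np.maximum(new_lo, 0.0), np.maximum(new_hi, 0.0)
        else:
            lo, hi = new_lo, new_hi
    return lo, hi

def maxpool_bounds(lo, hi, windows):
    """Sound bounds for max-pooling: the maximum over a window is bounded
    below by the max of the lower bounds and above by the max of the
    upper bounds of the nodes in that window."""
    pool_lo = np.array([lo[list(w)].max() for w in windows])
    pool_hi = np.array([hi[list(w)].max() for w in windows])
    return pool_lo, pool_hi

# Toy usage: a two-layer network and an L-infinity ball of radius eps
# around an input point x.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((4, 3)), rng.standard_normal((2, 4))]
biases = [rng.standard_normal(4), rng.standard_normal(2)]
x, eps = rng.standard_normal(3), 0.1
lo, hi = interval_bounds(weights, biases, x - eps, x + eps)
assert np.all(lo <= hi)
p_lo, p_hi = maxpool_bounds(lo, hi, [(0, 1)])  # one window over both outputs

Interval arithmetic like this is sound but loose; the paper's contribution is to keep the same decoupled structure while computing each bound from its own, tighter symbolic relaxation.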