arXiv:1803.06567 [cs.LG]

A Dual Approach to Scalable Verification of Deep Networks

Krishnamurthy Dvijotham, Robert Stanforth, Sven Gowal, Timothy Mann, Pushmeet Kohli

Published 2018-03-17 (Version 1)

This paper addresses the problem of formally verifying desirable properties of neural networks, i.e., obtaining provable guarantees that the outputs of the neural network will always behave in a certain way for a given class of inputs. Most previous work on this topic was limited in its applicability by network size, architecture, and the complexity of the properties to be verified. In contrast, our framework applies to a much more general class of activation functions and specifications on neural network inputs and outputs. We formulate verification as an optimization problem and solve a Lagrangian relaxation of the optimization problem to obtain an upper bound on the verification objective. Our approach is anytime, i.e., it can be stopped at any point and still yield a valid bound on the objective. We develop specialized verification algorithms with provable tightness guarantees under special assumptions and demonstrate the practical significance of our general verification approach on a variety of verification tasks.
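To make the Lagrangian-relaxation idea concrete, here is a minimal sketch, not the paper's exact algorithm or notation, for the simplest case: a one-hidden-layer ReLU network with a box input set. The weights W, b, the linear objective c, and the function names below are illustrative assumptions, not from the paper.

```python
# A minimal sketch (assumed setup, not the paper's exact formulation) of a
# Lagrangian-dual verification bound for a one-hidden-layer ReLU network:
#   p* = max_{x in [xl, xu]} c^T relu(W x + b).
# Splitting the coupling constraint z = W x + b with a multiplier lam gives,
# by weak duality, a valid upper bound for ANY choice of lam:
#   g(lam) = max_{x in [xl, xu]} lam^T (W x + b)
#          + max_{z in [zl, zu]} (c^T relu(z) - lam^T z)  >=  p*,
# where [zl, zu] are interval bounds on z. Both inner maxima decompose
# per coordinate and have closed forms.
import numpy as np

def dual_bound(W, b, c, xl, xu, lam):
    # Interval bounds on the pre-activations z = W x + b over the input box.
    Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
    zl = Wp @ xl + Wn @ xu + b
    zu = Wp @ xu + Wn @ xl + b
    # max over x in the box of (W^T lam)^T x, solved coordinate-wise.
    v = W.T @ lam
    term_x = np.sum(np.where(v > 0, v * xu, v * xl)) + lam @ b
    # max over z_i in [zl_i, zu_i] of c_i*relu(z_i) - lam_i*z_i:
    # piecewise linear in z_i, so the max is at zl_i, zu_i, or 0.
    def f(z):
        return c * np.maximum(z, 0) - lam * z
    zero_ok = (zl <= 0) & (0 <= zu)
    cands = np.stack([f(zl), f(zu), np.where(zero_ok, 0.0, -np.inf)])
    term_z = np.sum(np.max(cands, axis=0))
    return term_x + term_z

# Tiny example: any lam yields a valid bound (the anytime property);
# lam = 0 recovers plain interval arithmetic.
rng = np.random.default_rng(0)
W, b, c = rng.normal(size=(4, 3)), rng.normal(size=4), rng.normal(size=4)
xl, xu = -np.ones(3), np.ones(3)
print(dual_bound(W, b, c, xl, xu, np.zeros(4)))
```

Minimizing g over lam by (sub)gradient steps tightens the bound, and because every lam yields a valid bound, stopping at any iteration still returns a certificate; this is the anytime property the abstract refers to.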

Related articles:
arXiv:2007.10099 [cs.LG] (Published 2020-07-20)
Early Stopping in Deep Networks: Double Descent and How to Eliminate it
arXiv:1807.09011 [cs.LG] (Published 2018-07-24)
Uncertainty Modelling in Deep Networks: Forecasting Short and Noisy Series
arXiv:1511.06068 [cs.LG] (Published 2015-11-19)
Reducing Overfitting in Deep Networks by Decorrelating Representations