arXiv:2212.01544 [cs.LG]

Probabilistic Verification of ReLU Neural Networks via Characteristic Functions

Joshua Pilipovsky, Vignesh Sivaramakrishnan, Meeko M. K. Oishi, Panagiotis Tsiotras

Published 2022-12-03 (Version 1)

Verifying that the input-output relationship of a neural network satisfies a desired performance specification is a difficult, yet important, problem, given the growing ubiquity of neural networks in engineering applications. We use ideas from probability theory in the frequency domain to provide probabilistic verification guarantees for ReLU neural networks. Specifically, we interpret a (deep) feedforward neural network as a discrete dynamical system over a finite horizon that shapes distributions of initial states, and use characteristic functions to propagate the distribution of the input data through the network. Using the inverse Fourier transform, we obtain the corresponding cumulative distribution function of the output set, which can be used to check whether the network performs as expected for any random point from the input set. The proposed approach does not require distributions to have well-defined moments or moment generating functions. We demonstrate the approach on two examples and compare its performance to related approaches.
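To make the characteristic-function idea concrete, here is a minimal, hypothetical sketch, not the authors' algorithm: it estimates the empirical characteristic function of a toy ReLU network's scalar output from Monte Carlo samples and inverts it with the Gil-Pelaez formula to evaluate the output CDF at a specification threshold. The paper instead propagates the input distribution's characteristic function through the layers analytically; the network architecture, weights, sample size, and integration grid below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-layer feedforward ReLU network with a scalar output
# (sizes and weights are arbitrary illustrative choices).
W1, b1 = rng.normal(size=(4, 2)), rng.normal(size=4)
W2, b2 = rng.normal(size=(1, 4)), rng.normal(size=1)

def network(x):
    h = np.maximum(W1 @ x + b1, 0.0)   # hidden ReLU layer
    return (W2 @ h + b2).item()        # scalar output

# Random inputs; a Gaussian is used here only for convenience --
# the CF machinery does not require moments or an MGF to exist.
X = rng.normal(size=(5_000, 2))
Y = np.array([network(x) for x in X])

def empirical_cf(t, samples):
    """phi(t) = E[exp(i t Y)], estimated from output samples."""
    return np.exp(1j * np.outer(t, samples)).mean(axis=1)

def cdf_via_cf(y, samples, t_max=50.0, n=1000):
    """Gil-Pelaez inversion:
    F(y) = 1/2 - (1/pi) * int_0^inf Im[exp(-i t y) phi(t)] / t dt,
    approximated by a crude truncated Riemann sum on a uniform grid."""
    t = np.linspace(1e-3, t_max, n)    # skip t = 0 to avoid the 1/t singularity
    phi = empirical_cf(t, samples)
    integrand = np.imag(np.exp(-1j * t * y) * phi) / t
    return 0.5 - integrand.sum() * (t[1] - t[0]) / np.pi

# Probabilistic check of a hypothetical output specification, e.g. P(Y <= 0).
print("P(Y <= 0), CF inversion:", cdf_via_cf(0.0, Y))
print("P(Y <= 0), empirical   :", np.mean(Y <= 0.0))
```

The two printed probabilities should roughly agree; the gap reflects the Monte Carlo estimate of the CF and the truncated numerical inversion, both of which are stand-ins for the analytic propagation described in the abstract.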

Related articles:
arXiv:2006.06878 [cs.LG] (Published 2020-06-11)
Optimization Theory for ReLU Neural Networks Trained with Normalization Layers
arXiv:2101.09306 [cs.LG] (Published 2021-01-22)
Partition-Based Convex Relaxations for Certifying the Robustness of ReLU Neural Networks
arXiv:2105.14835 [cs.LG] (Published 2021-05-31)
Towards Lower Bounds on the Depth of ReLU Neural Networks