arXiv:1906.07982 [cs.LG]

A unified view on differential privacy and robustness to adversarial examples

Rafael Pinot, Florian Yger, Cédric Gouy-Pailler, Jamal Atif

Published 2019-06-19 (Version 1)

This short note highlights some links between two lines of research within the emerging topic of trustworthy machine learning: differential privacy and robustness to adversarial examples. By abstracting the definitions of both notions, we show that they build upon the same theoretical ground, and hence results obtained so far in one domain can be transferred to the other. More precisely, our analysis relies on two key elements: probabilistic mappings (also called randomized algorithms in the differential privacy community), and the Rényi divergence, which subsumes a large family of divergences. We first generalize the definition of robustness against adversarial examples to encompass probabilistic mappings. Then we observe that Rényi differential privacy (a generalization of differential privacy recently proposed by Mironov (2017)) and our definition of robustness share several similarities. We finally discuss how both communities can benefit from this connection by transferring technical tools from one research field to the other.
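The Rényi divergence mentioned in the abstract is, for two discrete distributions P and Q and an order α > 0, α ≠ 1, defined as D_α(P‖Q) = (1/(α−1)) · log Σᵢ pᵢ^α qᵢ^(1−α); it recovers the Kullback-Leibler divergence as α → 1. The sketch below is an illustrative implementation of this standard definition, not code from the paper:

```python
import math

def renyi_divergence(p, q, alpha):
    """Rényi divergence D_alpha(P || Q) between two discrete distributions.

    For alpha > 0 and alpha != 1:
        D_alpha(P || Q) = 1 / (alpha - 1) * log( sum_i p_i**alpha * q_i**(1 - alpha) )
    As alpha -> 1 this converges to the Kullback-Leibler divergence.
    """
    if alpha <= 0 or alpha == 1:
        raise ValueError("alpha must be positive and different from 1")
    s = sum(pi ** alpha * qi ** (1 - alpha) for pi, qi in zip(p, q))
    return math.log(s) / (alpha - 1)

p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]

# Identical distributions have zero divergence at every order alpha.
assert abs(renyi_divergence(p, p, alpha=2.0)) < 1e-12

# D_alpha is non-decreasing in alpha (a standard property of the family).
assert renyi_divergence(p, q, 1.5) <= renyi_divergence(p, q, 2.0) + 1e-12
```

Bounding this divergence between the output distributions of a probabilistic mapping on neighboring inputs is exactly the shape of a Rényi differential privacy guarantee, which is what allows the note to connect privacy and robustness.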
