arXiv Analytics

arXiv:1907.05401 [cs.DS]

Computational Concentration of Measure: Optimal Bounds, Reductions, and More

Omid Etesami, Saeed Mahloujifar, Mohammad Mahmoody

Published 2019-07-11Version 1

Product measures of dimension $n$ are known to be concentrated in Hamming distance: for any set $S$ of probability $\epsilon$ in the product space, a random point in the space, with probability $1-\delta$, has a neighbor in $S$ that differs from the original point in only $O(\sqrt{n\ln(1/(\epsilon\delta))})$ coordinates. We obtain the tight computational version of this result, showing how, given a random point and access to an $S$-membership oracle, we can find such a close point in polynomial time. This resolves an open question of [Mahloujifar and Mahmoody, ALT 2019].

As corollaries, we obtain polynomial-time poisoning and (in certain settings) evasion attacks against learning algorithms whenever the original vulnerabilities have any cryptographically non-negligible probability. We call our algorithm MUCIO ("MUltiplicative Conditional Influence Optimizer") because, proceeding through the coordinates, it decides whether to change each coordinate of the given point based on a multiplicative version of the influence of that coordinate, where influence is computed conditioned on the previously updated coordinates.

We also define a new notion of algorithmic reduction between computational concentration of measure in different metric probability spaces. As an application, we get computational concentration of measure for high-dimensional Gaussian distributions under the $\ell_1$ metric.

We prove several extensions of the results above:
(1) Our computational concentration result also holds when the Hamming distance is weighted.
(2) We obtain an algorithmic version of concentration around the mean, more specifically, of McDiarmid's inequality.
(3) Our result generalizes to discrete random processes, which leads to new tampering algorithms for collective coin-tossing protocols.
(4) We prove exponential lower bounds on the average running time of non-adaptive query algorithms.
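To make the coordinate-by-coordinate idea concrete, here is a minimal, hedged sketch of a greedy search in the spirit described above: walking through the coordinates, it estimates the conditional probability of landing in $S$ (via sampled suffixes and the membership oracle) and changes a coordinate only when doing so multiplicatively boosts that probability. All names (`find_close_point`, `boost`, `trials`) and the binary alphabet are illustrative assumptions, not the paper's actual MUCIO algorithm or its parameters.

```python
def find_close_point(x, in_S, sample_coord, trials=200, boost=1.5):
    """Greedy coordinate-by-coordinate search toward S (illustrative sketch).

    x            -- starting point, a list of 0/1 coordinates
    in_S         -- membership oracle: in_S(z) is True iff z is in S
    sample_coord -- sample_coord(j) draws coordinate j from the product measure
    trials       -- Monte Carlo samples per conditional-probability estimate
    boost        -- multiplicative threshold for changing a coordinate
    """
    n = len(x)
    y = list(x)
    for i in range(n):
        def cond_prob(v):
            # Estimate Pr[z in S | z[:i] = y[:i], z[i] = v] by filling the
            # suffix with fresh product-measure samples and querying the oracle.
            hits = 0
            for _ in range(trials):
                z = y[:i] + [v] + [sample_coord(j) for j in range(i + 1, n)]
                hits += bool(in_S(z))
            return hits / trials

        keep = cond_prob(y[i])
        flip = cond_prob(1 - y[i])  # binary alphabet in this sketch
        # Change coordinate i only if flipping multiplicatively boosts the
        # conditional success probability (1/trials guards against zero).
        if flip > boost * max(keep, 1 / trials):
            y[i] = 1 - y[i]
    return y
```

For example, with $S = \{z : z_0 = z_1 = 1\}$ over uniform bits and a starting point of all zeros, the sketch flips exactly the first two coordinates, ending in $S$ at Hamming distance 2.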

Related articles: Most relevant | Search more
arXiv:2306.11951 [cs.DS] (Published 2023-06-21)
On the Optimal Bounds for Noisy Computing
arXiv:2012.04090 [cs.DS] (Published 2020-12-07)
Almost Optimal Bounds for Sublinear-Time Sampling of $k$-Cliques: Sampling Cliques is Harder Than Counting
arXiv:1307.3301 [cs.DS] (Published 2013-07-12, updated 2015-03-30)
Optimal Bounds on Approximation of Submodular and XOS Functions by Juntas