arXiv:2112.09181 [cs.LG]

Approximation of functions with one-bit neural networks

C. Sinan Güntürk, Weilin Li

Published 2021-12-16, updated 2023-03-16 (version 2)

The celebrated universal approximation theorems for neural networks roughly state that any reasonable function can be arbitrarily well approximated by a network whose parameters are appropriately chosen real numbers. This paper examines the approximation capabilities of one-bit neural networks -- those whose nonzero parameters are $\pm a$ for some fixed $a\neq 0$. One of our main theorems shows that for any $f\in C^s([0,1]^d)$ with $\|f\|_\infty<1$ and any error tolerance $\varepsilon>0$, there is an $f_{NN}$ such that $|f(\boldsymbol{x})-f_{NN}(\boldsymbol{x})|\leq \varepsilon$ for all $\boldsymbol{x}$ away from the boundary of $[0,1]^d$, where $f_{NN}$ is implementable either by a $\{\pm 1\}$ quadratic network with $O(\varepsilon^{-2d/s})$ parameters or by a $\{\pm \frac{1}{2}\}$ ReLU network with $O(\varepsilon^{-2d/s}\log(1/\varepsilon))$ parameters, as $\varepsilon\to 0$. We establish new approximation results for iterated multivariate Bernstein operators, error estimates for noise-shaping quantization on the Bernstein basis, and a novel implementation of the Bernstein polynomials by one-bit quadratic and ReLU neural networks.
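To make the quantization ingredient concrete, the following is a minimal one-dimensional sketch of the general idea, not the paper's construction: expand a smooth $f$ with $\|f\|_\infty<1$ in the Bernstein basis and replace the real coefficients $f(k/n)$ by one-bit values in $\{\pm 1\}$ produced by a greedy first-order Sigma-Delta (noise-shaping) quantizer. The helper names, the test function, the degree $n$, and the evaluation grid are all illustrative choices introduced here, not taken from the paper.

```python
# Illustrative sketch only (assumed setup, not the paper's construction):
# approximate a smooth f on [0,1] by its degree-n Bernstein polynomial,
# then replace the real coefficients f(k/n) by one-bit values in {-1,+1}
# using a greedy first-order Sigma-Delta (noise-shaping) quantizer.
from math import comb
import numpy as np

def bernstein_matrix(n, x):
    """Rows: evaluation points; columns: Bernstein basis p_{n,k}(x)."""
    return np.array([[comb(n, k) * xi**k * (1 - xi)**(n - k)
                      for k in range(n + 1)] for xi in x])

def sigma_delta_one_bit(y):
    """First-order Sigma-Delta: q_k = sign(u_{k-1} + y_k), u_k = u_{k-1} + y_k - q_k.
    For |y_k| < 1 the state u_k stays in (-1, 1), so the quantization error is
    pushed into first differences of the coefficient sequence."""
    u, q = 0.0, np.empty_like(y)
    for k, yk in enumerate(y):
        q[k] = 1.0 if u + yk >= 0 else -1.0
        u += yk - q[k]
    return q

def f(t):
    return 0.8 * np.sin(2 * np.pi * t)            # smooth test function, |f| < 1

if __name__ == "__main__":
    n = 200                                        # Bernstein degree
    x = np.linspace(0.05, 0.95, 400)               # grid away from the boundary
    B = bernstein_matrix(n, x)                     # (400, n+1) design matrix
    coeffs = f(np.arange(n + 1) / n)               # real Bernstein coefficients
    bits = sigma_delta_one_bit(coeffs)             # one-bit coefficients in {-1,+1}
    print("real-coefficient sup error:", np.max(np.abs(B @ coeffs - f(x))))
    print("one-bit sup error        :", np.max(np.abs(B @ bits - f(x))))
```

The point of the noise-shaping step is that the coefficient error ends up multiplying differences of consecutive Bernstein basis polynomials, which are small away from the boundary of $[0,1]$, so the one-bit expansion tracks the real-coefficient one; the paper's contribution is to carry this program out in the multivariate, iterated-operator setting and to realize the resulting polynomials with one-bit quadratic and ReLU networks.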

Comments: 45 pages, 7 figures, significant changes and additions
Categories: cs.LG, cs.IT, cs.NA, math.IT, math.NA
Related articles:
arXiv:2202.03841 [cs.LG] (Published 2022-02-08)
Width is Less Important than Depth in ReLU Neural Networks
arXiv:2306.13264 [cs.LG] (Published 2023-06-23)
FedSelect: Customized Selection of Parameters for Fine-Tuning during Personalized Federated Learning
arXiv:2406.14936 [cs.LG] (Published 2024-06-21)
On the growth of the parameters of approximating ReLU neural networks