arXiv:2406.14936 [cs.LG]

On the growth of the parameters of approximating ReLU neural networks

Erion Morina, Martin Holler

Published 2024-06-21 (Version 1)

This work analyzes fully connected feed-forward ReLU neural networks that approximate a given smooth function. In contrast to conventionally studied universal approximation properties under increasing architectures, e.g., in terms of network width or depth, we are concerned with the asymptotic growth of the parameters of the approximating networks. Such results are of interest, e.g., for error analysis or consistency results for neural network training. The main result of our work is that, for a ReLU architecture achieving a state-of-the-art approximation error, the realizing parameters grow at most polynomially. The obtained rate, measured with respect to a normalized network size, is compared to existing results and shown to be superior in most cases, in particular for high-dimensional input.
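The interplay between approximation error and parameter magnitude can be illustrated with a toy experiment. The sketch below (an assumption for illustration only, not the paper's construction or its rate) writes the piecewise-linear interpolant of a smooth 1D function as a one-hidden-layer ReLU network with n knots, then reports how the error and the largest output weight behave as n grows: the error shrinks while the parameters stay bounded.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def relu_interpolant(f, n, a=0.0, b=1.0):
    """Piecewise-linear interpolant of f on [a, b], realized as a
    one-hidden-layer ReLU network with n hidden units.
    Hypothetical illustrative construction, not the paper's architecture."""
    t = np.linspace(a, b, n + 1)               # knots
    y = f(t)
    slopes = np.diff(y) / np.diff(t)           # slope on each segment
    # Output weights: first slope, then the slope change at each interior knot.
    # For smooth f these changes scale like f'' * (b - a) / n, so they stay small.
    c = np.concatenate(([slopes[0]], np.diff(slopes)))
    biases = -t[:-1]                           # hidden-layer biases

    def net(x):
        # y[0] + sum_i c_i * ReLU(x - t_i)
        return y[0] + relu(np.add.outer(x, biases)) @ c

    return net, c

if __name__ == "__main__":
    xs = np.linspace(0.0, 1.0, 1001)
    for n in (8, 32, 128):
        net, c = relu_interpolant(np.cos, n)
        err = np.max(np.abs(net(xs) - np.cos(xs)))
        print(f"n={n:4d}  sup-error={err:.2e}  max|output weight|={np.max(np.abs(c)):.2e}")
```

In this simple setting the output weights even stay uniformly bounded as the network grows, which is stronger than the polynomial growth the paper establishes for its (much more general, high-dimensional, state-of-the-art-rate) architectures.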

Related articles:
arXiv:2306.13264 [cs.LG] (Published 2023-06-23)
FedSelect: Customized Selection of Parameters for Fine-Tuning during Personalized Federated Learning
arXiv:2112.09181 [cs.LG] (Published 2021-12-16, updated 2023-03-16)
Approximation of functions with one-bit neural networks
arXiv:2307.03756 [cs.LG] (Published 2023-07-06)
FITS: Modeling Time Series with $10k$ Parameters