arXiv Analytics

arXiv:2312.16060 [cs.LG]

Error-free Training for Artificial Neural Network

Bo Deng

Published 2023-12-26 (Version 1)

Conventional training methods for artificial neural network (ANN) models do not systematically achieve a zero error rate on large datasets. A new training method consists of three steps: first, create an auxiliary (cloned) dataset from conventionally trained parameters, so that those parameters are exactly a global minimum of the loss function for the auxiliary data; second, create a one-parameter homotopy (hybrid) of the auxiliary data and the original data; and third, train the model on the hybrid data iteratively, continuing from the auxiliary-data end of the homotopy parameter to the original-data end while maintaining a zero training error rate at every iteration. This continuation method is guaranteed to converge numerically by a theorem that recasts the ANN training problem as a continuation problem for fixed points of a parameterized transformation on the training-parameter space, to which the Uniform Contraction Mapping Theorem from dynamical systems applies.
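The three steps above can be sketched in code. This is a minimal illustration, not the paper's implementation: it assumes a toy one-layer tanh regression model, plain gradient descent as the "conventional" trainer, auxiliary targets built as the model's own outputs at the trained parameters (so those parameters are an exact global minimum with zero loss), and a linear homotopy in the targets continued from t = 0 (auxiliary data) to t = 1 (original data). All function and variable names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-layer model: y_hat = tanh(X @ W)  (stand-in for a general ANN)
def forward(X, W):
    return np.tanh(X @ W)

def loss(W, X, Y):
    # Mean-of-samples squared error
    return 0.5 * np.sum((forward(X, W) - Y) ** 2) / X.shape[0]

def grad(W, X, Y):
    # Exact gradient of the loss above with respect to W
    P = forward(X, W)
    return X.T @ ((P - Y) * (1 - P ** 2)) / X.shape[0]

# Original data: realizable targets, chosen so zero error is attainable
X = rng.normal(size=(50, 3))
Y = np.tanh(X @ rng.normal(size=(3, 2)))

# Step 1: conventional training to some parameters W, then auxiliary targets
# Y_aux = model outputs at W, making W an exact global minimum (zero loss)
# for the auxiliary data.
W = 0.1 * rng.normal(size=(3, 2))
for _ in range(200):
    W -= 0.5 * grad(W, X, Y)
Y_aux = forward(X, W)  # loss(W, X, Y_aux) == 0 by construction

# Steps 2-3: homotopy Y(t) = (1 - t) * Y_aux + t * Y, continued from the
# auxiliary end t=0 to the original end t=1, retraining at each step so the
# near-zero-error solution is carried along the homotopy path.
for t in np.linspace(0.0, 1.0, 21):
    Y_t = (1 - t) * Y_aux + t * Y
    for _ in range(100):
        W -= 0.5 * grad(W, X, Y_t)

final_loss = loss(W, X, Y)
print(final_loss)
```

Because the parameters start each continuation step at the minimizer of the previous step's loss, each retraining phase only needs a small correction; this is the same warm-starting idea that the paper's uniform-contraction argument makes rigorous.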

Comments: 10 pages, 3 figures, Matlab mfiles available for online download
Categories: cs.LG, cs.NE, math.DS
Related articles:
arXiv:1511.03984 [cs.LG] (Published 2015-11-12)
Prediction of the Yield of Enzymatic Synthesis of Betulinic Acid Ester Using Artificial Neural Networks and Support Vector Machine
arXiv:2107.08399 [cs.LG] (Published 2021-07-18)
A method for estimating the entropy of time series using artificial neural network
arXiv:1811.03403 [cs.LG] (Published 2018-11-08)
ExGate: Externally Controlled Gating for Feature-based Attention in Artificial Neural Networks