arXiv Analytics


arXiv:1807.07998 [cs.LG]

Convolutional Neural Networks Analyzed via Inverse Problem Theory and Sparse Representations

Cem Tarhan, Gozde Bozdagi Akar

Published 2018-07-20 (Version 1)

Inverse problems in imaging, such as denoising, deblurring, and super-resolution (SR), have been addressed for many decades. In recent years, convolutional neural networks (CNNs) have been widely applied to many of these inverse problem areas. Despite their indisputable success, CNNs lack mathematical validation of how and what they learn. In this paper, we prove that during training, CNN elements solve inverse problems whose optimum solutions are stored as the neurons' filters. We discuss the mutual coherence required between CNN layer elements for a network to converge to the optimum solution, and we prove that this required mutual coherence can be provided through residual learning and skip connections. We also set rules on training sets and network depth for better convergence, i.e., performance.
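As a concrete reference for the mutual-coherence criterion the abstract invokes, here is a minimal NumPy sketch of the standard sparse-representation definition: the largest absolute normalized inner product between two distinct dictionary atoms. The `mutual_coherence` helper and the random 3x3 filter bank are illustrative assumptions, not code from the paper.

```python
import numpy as np

def mutual_coherence(D):
    """Mutual coherence of a dictionary D (atoms as columns):
    max absolute inner product between distinct unit-norm atoms.
    Lower coherence favors recovery of sparse representations."""
    Dn = D / np.linalg.norm(D, axis=0, keepdims=True)  # normalize each atom
    G = np.abs(Dn.T @ Dn)                              # absolute Gram matrix
    np.fill_diagonal(G, 0.0)                           # ignore self-products
    return G.max()

# Treat a bank of 3x3 convolution filters as dictionary atoms.
rng = np.random.default_rng(0)
filters = rng.standard_normal((8, 3, 3))   # 8 hypothetical filters
D = filters.reshape(8, -1).T               # atoms as columns: shape (9, 8)
print(mutual_coherence(D))                 # value in [0, 1]
```

An orthonormal filter bank attains coherence 0, the most favorable case; values near 1 indicate near-duplicate filters, which in the paper's framing hinder convergence to the optimum solution.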

Comments: Pre-print, IET Signal Processing Journal
Categories: cs.LG, cs.AI, stat.ML
Related articles:
arXiv:1811.10746 [cs.LG] (Published 2018-11-26)
MATCH-Net: Dynamic Prediction in Survival Analysis using Convolutional Neural Networks
arXiv:1810.13098 [cs.LG] (Published 2018-10-31)
Low-Rank Embedding of Kernels in Convolutional Neural Networks under Random Shuffling
arXiv:1711.01634 [cs.LG] (Published 2017-11-05)
Strategies for Conceptual Change in Convolutional Neural Networks