arXiv:1904.01399 [cs.LG]

On Geometric Structure of Activation Spaces in Neural Networks

Yuting Jia, Haiwen Wang, Shuo Shao, Huan Long, Yunsong Zhou, Xinbing Wang

Published 2019-04-02 (Version 1)

In this paper, we investigate the geometric structure of the activation spaces of fully connected layers in neural networks and then show applications of this study. We propose an efficient approximation algorithm to characterize the convex hull of massive points in high-dimensional space. Based on this new algorithm, we identify four common geometric properties shared by the activation spaces, which together give a rather clear description of those spaces. We then propose an alternative classification method grounded in this geometric structure description, which works better than the neural networks alone. Surprisingly, this classification method can also serve as an indicator of overfitting in neural networks. We believe our work reveals several critical intrinsic properties of modern neural networks and further provides a new metric for evaluating them.
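The core geometric question behind the abstract is whether a given activation vector lies inside the convex hull of a large set of activation points. The paper's own approximation algorithm is not reproduced here; the sketch below only illustrates the exact membership test as a linear feasibility problem (x is in conv(P) iff there exist weights λ ≥ 0 with Σλ = 1 and Pᵀλ = x), assuming NumPy and SciPy are available. All names and the random stand-in data are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def in_convex_hull(points, x):
    """Exact convex-hull membership test via an LP feasibility problem:
    find lambda >= 0 with sum(lambda) = 1 and points.T @ lambda = x.
    `points` is (n_points, dim); `x` is (dim,). This is the exact check,
    not the paper's approximation algorithm for massive point sets."""
    n = points.shape[0]
    A_eq = np.vstack([points.T, np.ones((1, n))])   # dim equality rows + simplex constraint
    b_eq = np.concatenate([x, [1.0]])
    res = linprog(c=np.zeros(n), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * n, method="highs")
    return res.success

# Illustrative usage: 500 random "activations" from a 64-unit layer.
rng = np.random.default_rng(0)
acts = rng.normal(size=(500, 64))
print(in_convex_hull(acts, acts.mean(axis=0)))  # the mean of the points is always inside -> True
```

The exact LP becomes expensive when the number of points and the dimension are both large, which is the regime the paper's approximation algorithm is designed for.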

Related articles:
arXiv:1805.07405 [cs.LG] (Published 2018-05-18)
Processing of missing data by neural networks
arXiv:1805.09370 [cs.LG] (Published 2018-05-23)
Towards Robust Training of Neural Networks by Regularizing Adversarial Gradients
arXiv:1811.12273 [cs.LG] (Published 2018-11-29)
On the Transferability of Representations in Neural Networks Between Datasets and Tasks