arXiv:1908.05164 [cs.LG]

Unconstrained Monotonic Neural Networks

Antoine Wehenkel, Gilles Louppe

Published 2019-08-14 (Version 1)

Monotonic neural networks have recently been proposed as a way to define invertible transformations. These transformations can be combined into powerful autoregressive flows that have been shown to be universal approximators of continuous probability distributions. Architectures that ensure monotonicity typically enforce constraints on weights and activation functions, which enables invertibility but limits the expressiveness of the resulting transformations. In this work, we propose the Unconstrained Monotonic Neural Network (UMNN) architecture, based on the insight that a function is monotonic as long as its derivative is strictly positive. In particular, this latter condition can be enforced with a free-form neural network whose only constraint is the positivity of its output. We evaluate our new invertible building block within a new autoregressive flow (UMNN-MAF) and demonstrate its effectiveness on density estimation experiments. We also illustrate the ability of UMNNs to improve variational inference.
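The construction described in the abstract can be sketched directly: model the derivative with an arbitrary network whose output is forced to be positive (for instance through a softplus), and obtain the monotonic function by numerically integrating it. The PyTorch snippet below is a minimal, unconditional illustration with hypothetical names (MonotonicIntegrand, UnconstrainedMonotonicNN); it uses simple trapezoidal quadrature rather than the Clenshaw-Curtis scheme of the paper and omits the autoregressive conditioning used in UMNN-MAF.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MonotonicIntegrand(nn.Module):
    """Free-form network g(t); its only constraint is a strictly positive output."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, t):
        # Softplus (plus a small epsilon) keeps g(t) > 0, so its integral is monotonic.
        return F.softplus(self.net(t)) + 1e-6

class UnconstrainedMonotonicNN(nn.Module):
    """Monotonic map F(x) = beta + integral_0^x g(t) dt, with g > 0 (toy sketch)."""
    def __init__(self, n_nodes=50):
        super().__init__()
        self.g = MonotonicIntegrand()
        self.beta = nn.Parameter(torch.zeros(1))
        self.n_nodes = n_nodes

    def forward(self, x):
        # x: (B, 1). Integrate g over [0, x] per sample with the trapezoidal rule,
        # used here as a stand-in for the Clenshaw-Curtis quadrature of the paper.
        s = torch.linspace(0.0, 1.0, self.n_nodes, device=x.device)       # (S,)
        nodes = x * s                                                     # (B, S)
        g_vals = self.g(nodes.reshape(-1, 1)).reshape(x.shape[0], -1)     # (B, S)
        integral = torch.trapz(g_vals, nodes, dim=1)                      # (B,)
        return self.beta + integral.unsqueeze(-1)                         # (B, 1)

# With a reasonably fine quadrature, outputs increase with x, which is what
# makes the block invertible (e.g. by root finding) inside a flow.
umnn = UnconstrainedMonotonicNN()
x = torch.linspace(-3.0, 3.0, steps=7).unsqueeze(-1)
y = umnn(x)
assert torch.all(y[1:] > y[:-1])
```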

Related articles:
arXiv:2106.03228 [cs.LG] (Published 2021-06-06)
Distributional Reinforcement Learning with Unconstrained Monotonic Neural Networks
arXiv:1810.04570 [cs.LG] (Published 2018-10-09)
Building a Reproducible Machine Learning Pipeline
arXiv:2311.09245 [cs.LG] (Published 2023-11-13)
Affine Invariance in Continuous-Domain Convolutional Neural Networks