arXiv:2405.14473 [cs.LG]

Poisson Variational Autoencoder

Hadi Vafaii, Dekel Galor, Jacob L. Yates

Published 2024-05-23, Version 1

Variational autoencoders (VAEs) employ Bayesian inference to interpret sensory inputs, mirroring processes that occur in primate vision across both ventral (Higgins et al., 2021) and dorsal (Vafaii et al., 2023) pathways. Despite their success, traditional VAEs rely on continuous latent variables, a choice that deviates sharply from the discrete nature of biological neurons. Here, we develop the Poisson VAE (P-VAE), a novel architecture that combines principles of predictive coding with a VAE that encodes inputs into discrete spike counts. Combining Poisson-distributed latent variables with predictive coding introduces a metabolic cost term into the model loss function, suggesting a relationship with sparse coding, which we verify empirically. Additionally, we analyze the geometry of the learned representations, contrasting the P-VAE with alternative VAE models. We find that the P-VAE encodes its inputs in relatively higher dimensions, facilitating linear separability of categories in a downstream classification task with much better (5x) sample efficiency. Our work provides an interpretable computational framework to study brain-like sensory processing and paves the way for a deeper understanding of perception as an inferential process.
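For intuition on where the metabolic cost term comes from, here is a minimal, hypothetical sketch, not the authors' code: the KL term of the ELBO between a Poisson posterior with inferred rate lambda and a Poisson prior with rate lambda_0 has the closed form KL = lambda_0 - lambda + lambda * log(lambda / lambda_0). Its rate-dependent component penalizes high expected spike counts, much like the L1 penalty in sparse coding. Function and variable names below are illustrative assumptions.

import torch

def poisson_kl(rate: torch.Tensor, prior_rate: torch.Tensor) -> torch.Tensor:
    """Closed-form KL( Poisson(rate) || Poisson(prior_rate) ).

    The rate-dependent terms grow with the expected spike count,
    acting as the "metabolic cost" that links a Poisson-latent
    objective to sparse coding.
    """
    return prior_rate - rate + rate * torch.log(rate / prior_rate)

# Toy usage: per-neuron KL for one example with three latent neurons
# (rates and prior are hypothetical values, chosen for illustration).
rates = torch.tensor([0.1, 2.0, 5.0])  # inferred posterior firing rates
prior = torch.tensor(1.0)              # prior firing rate
kl_total = poisson_kl(rates, prior).sum()  # KL contribution to the ELBO
print(kl_total)

Summing this penalty over neurons makes high firing rates expensive, so minimizing the loss drives the model toward sparse spike-count codes.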

Related articles:
arXiv:2405.16225 [cs.LG] (Published 2024-05-25, updated 2024-06-06)
Local Causal Structure Learning in the Presence of Latent Variables
arXiv:2303.14430 [cs.LG] (Published 2023-03-25)
Beta-VAE has 2 Behaviors: PCA or ICA?
arXiv:2102.03129 [cs.LG] (Published 2021-02-05)
Integer Programming for Causal Structure Learning in the Presence of Latent Variables