arXiv:2106.12570 [cs.LG]

Learning Multimodal VAEs through Mutual Supervision

Tom Joy, Yuge Shi, Philip H. S. Torr, Tom Rainforth, Sebastian M. Schmon, N. Siddharth

Published 2021-06-23 (Version 1)

Multimodal VAEs seek to model the joint distribution over heterogeneous data (e.g. vision, language), whilst also capturing a shared representation across such modalities. Prior work has typically combined information from the modalities by reconciling idiosyncratic representations directly in the recognition model through explicit products, mixtures, or other such factorisations. Here we introduce a novel alternative, the MEME, that avoids such explicit combinations by repurposing semi-supervised VAEs to combine information between modalities implicitly through mutual supervision. This formulation naturally allows learning from partially-observed data where some modalities can be entirely missing -- something that most existing approaches either cannot handle or handle only to a limited extent. We demonstrate that MEME outperforms baselines on standard metrics across both partial and complete observation schemes on the MNIST-SVHN (image--image) and CUB (image--text) datasets. We also contrast the quality of the representations learnt by mutual supervision against standard approaches and observe interesting trends in its ability to capture relatedness between data.
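To make the contrast with explicit product/mixture fusion concrete, the sketch below shows one way the mutual-supervision idea could look in PyTorch: each modality's recognition model is kept separate, and the encoding of one modality serves as a learned prior over the latent for the other, so information is exchanged only through a KL term rather than by combining posteriors. This is an illustrative sketch under assumptions, not the authors' implementation; the network sizes, the symmetric two-direction objective, the Gaussian likelihood (MSE reconstruction), and the weighting `beta` are all choices made here for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GaussianEncoder(nn.Module):
    """Maps one modality to the mean/log-variance of a diagonal Gaussian over z."""
    def __init__(self, in_dim, latent_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, latent_dim)
        self.logvar = nn.Linear(hidden, latent_dim)

    def forward(self, x):
        h = self.net(x)
        return self.mu(h), self.logvar(h)


def kl_diag_gaussians(mu_q, logvar_q, mu_p, logvar_p):
    """KL( N(mu_q, var_q) || N(mu_p, var_p) ) for diagonal Gaussians, summed over latent dims."""
    var_q, var_p = logvar_q.exp(), logvar_p.exp()
    kl = 0.5 * (logvar_p - logvar_q + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)
    return kl.sum(dim=-1)


class MutualSupervisionVAE(nn.Module):
    """Two unimodal VAEs coupled by using each modality's encoding as the prior
    for the other's posterior -- an illustrative reading of 'mutual supervision',
    not the paper's exact objective."""
    def __init__(self, dim_x, dim_y, latent_dim=20):
        super().__init__()
        self.enc_x = GaussianEncoder(dim_x, latent_dim)
        self.enc_y = GaussianEncoder(dim_y, latent_dim)
        self.dec_x = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, dim_x))
        self.dec_y = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, dim_y))

    def _one_direction(self, src, tgt, enc_src, enc_tgt, dec_tgt, beta=1.0):
        # Posterior comes from the target modality; the prior is supplied by the source modality.
        mu_q, logvar_q = enc_tgt(tgt)
        mu_p, logvar_p = enc_src(src)
        z = mu_q + torch.randn_like(mu_q) * (0.5 * logvar_q).exp()  # reparameterisation trick
        recon = F.mse_loss(dec_tgt(z), tgt, reduction="none").sum(dim=-1)
        kl = kl_diag_gaussians(mu_q, logvar_q, mu_p, logvar_p)
        return (recon + beta * kl).mean()

    def forward(self, x, y, beta=1.0):
        # Symmetrise the objective: each modality acts as the prior for the other.
        loss_xy = self._one_direction(x, y, self.enc_x, self.enc_y, self.dec_y, beta)
        loss_yx = self._one_direction(y, x, self.enc_y, self.enc_x, self.dec_x, beta)
        return loss_xy + loss_yx
```

The point of the sketch is that the two recognition models are never merged into a single joint posterior (as in product- or mixture-of-experts factorisations); they influence one another only through the prior/posterior KL term, which is also what makes it natural to train on only one direction when a modality is missing.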

Related articles:
arXiv:2311.10707 [cs.LG] (Published 2023-11-17)
Multimodal Representation Learning by Alternating Unimodal Adaptation
arXiv:2306.04539 [cs.LG] (Published 2023-06-07)
Multimodal Learning Without Labeled Multimodal Data: Guarantees and Applications
arXiv:1802.05335 [cs.LG] (Published 2018-02-14)
Multimodal Generative Models for Scalable Weakly-Supervised Learning