arXiv Analytics

arXiv:2009.03034 [cs.LG]

Ordinal-Content VAE: Isolating Ordinal-Valued Content Factors in Deep Latent Variable Models

Minyoung Kim, Vladimir Pavlovic

Published 2020-09-07 (Version 1)

In deep representation learning, it is often desired to isolate a particular factor (termed {\em content}) from other factors (referred to as {\em style}). What constitutes the content is typically specified by users through explicit labels in the data, while all unlabeled/unknown factors are regarded as style. Recently, it has been shown that such content-labeled data can be effectively exploited by modifying deep latent variable models (e.g., the VAE) so that the style and content are well separated in the latent representations. However, such approaches assume that the content factor is categorical-valued (e.g., subject ID in face image data, or digit class in the MNIST dataset). In certain situations, the content is ordinal-valued, that is, the values the content factor takes are {\em ordered} rather than categorical, making content-labeled VAEs, including the latent space they infer, suboptimal. In this paper, we propose a novel extension of the VAE that imposes a partially ordered set (poset) structure in the content latent space, while simultaneously aligning it with the ordinal content values. To this end, instead of the iid Gaussian latent prior adopted in previous approaches, we introduce a conditional Gaussian spacing prior model. This model admits a tractable joint Gaussian prior while placing negligible density on content latent configurations that violate the poset constraint. To evaluate this model, we consider two specific ordinal structured problems: estimating a subject's age from a face image and estimating the calorie amount in a food image. We demonstrate significant improvements in content-style separation over previous non-ordinal approaches.
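The abstract does not spell out how the conditional Gaussian spacing prior is parameterized, so the sketch below is only one plausible reading: each content latent coordinate equals the previous one plus a positive mean gap with Gaussian noise, which keeps the joint prior Gaussian and tractable while concentrating mass on ordered configurations. The function name `spacing_prior` and the parameters `sigma0` and `tau` are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch (not the authors' code): a 1-D "spacing" prior where
#   z_k = z_{k-1} + d_k,   d_k ~ N(s_k, tau^2),   s_k > 0.
# The map from (z_1, d_2, ..., d_K) to z is linear, so the joint prior over z
# stays Gaussian and its mean/covariance have a closed form.

import torch

def spacing_prior(spacings, sigma0=1.0, tau=0.1):
    """Build the joint Gaussian prior N(mu, Sigma) implied by positive spacings.

    spacings: tensor of shape (K-1,), positive mean gaps s_2, ..., s_K.
    Returns (mu, Sigma) for the K-dimensional content latent z.
    """
    K = spacings.numel() + 1
    # Cumulative-sum matrix L so that z = L @ [z_1, d_2, ..., d_K]^T.
    L = torch.tril(torch.ones(K, K))
    # Base variables: z_1 has mean 0; the gaps have positive means s_k.
    base_mean = torch.cat([torch.zeros(1), spacings])
    base_var = torch.cat([torch.tensor([sigma0 ** 2]),
                          torch.full((K - 1,), tau ** 2)])
    mu = L @ base_mean
    Sigma = L @ torch.diag(base_var) @ L.T
    return mu, Sigma

# Usage: a 4-dimensional content latent with mean positions 0, 1, 2, 3.
mu, Sigma = spacing_prior(torch.ones(3))
prior = torch.distributions.MultivariateNormal(mu, covariance_matrix=Sigma)
z = prior.sample()         # samples overwhelmingly respect z_1 < z_2 < ... < z_K
log_p = prior.log_prob(z)  # tractable joint Gaussian log-density
```

With small noise `tau` relative to the gaps, configurations that violate the ordering receive negligible density, which is the qualitative behavior the abstract attributes to the proposed prior.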

Related articles:
arXiv:2102.06648 [cs.LG] (Published 2021-02-12)
A Critical Look At The Identifiability of Causal Effects with Deep Latent Variable Models
arXiv:1901.04866 [cs.LG] (Published 2019-01-15)
Practical Lossless Compression with Latent Variables using Bits Back Coding
arXiv:2212.08765 [cs.LG] (Published 2022-12-17)
Latent Variable Representation for Reinforcement Learning