arXiv Analytics


arXiv:2102.06648 [cs.LG]

A Critical Look At The Identifiability of Causal Effects with Deep Latent Variable Models

Severi Rissanen, Pekka Marttinen

Published 2021-02-12 (Version 1)

Using deep latent variable models in causal inference has attracted considerable interest recently, but an essential open question is their identifiability. While they have yielded promising results, and theory exists on the identifiability of some simple model formulations, we also know that causal effects cannot be identified in general in the presence of latent variables. We investigate this gap between theory and empirical results through theoretical considerations and extensive experiments on multiple synthetic and real-world data sets, using the causal effect variational autoencoder (CEVAE) as a case study. While CEVAE appears to work reliably in some simple scenarios, it does not identify the correct causal effect with a misspecified latent variable or a complex data distribution, contrary to the original goals of the model. Our results show that the question of identifiability cannot be disregarded, and we argue that more attention should be paid to it in future work.
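The core difficulty the abstract points to, that an unobserved confounder biases naive causal effect estimates, can be shown with a toy simulation. This sketch is not from the paper; the data-generating process and all coefficients are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hidden confounder Z influences both treatment T and outcome Y.
z = rng.normal(size=n)
t = (z + rng.normal(size=n) > 0).astype(int)  # treatment assignment depends on Z
y = 2.0 * t + 3.0 * z + rng.normal(size=n)    # true causal effect of T on Y is 2.0

# Naive difference in means is biased because Z is unobserved.
naive_ate = y[t == 1].mean() - y[t == 0].mean()

# Adjusting for Z (possible only if Z were observed) recovers the effect via OLS.
X = np.column_stack([np.ones(n), t, z])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
adjusted_ate = beta[1]

print(f"naive: {naive_ate:.2f}, adjusted: {adjusted_ate:.2f}")
# naive is roughly 5.4 here, while the adjusted estimate is close to the true 2.0
```

A model such as CEVAE attempts to infer a proxy for the unobserved Z from observed covariates; the paper's point is that whether this recovers the correct effect (identifiability) is not guaranteed in general.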

Comments: 8 pages main text + 14 pages references and supplementary; 13 figures
Categories: cs.LG