arXiv:2107.05481 [cs.LG]

Prequential MDL for Causal Structure Learning with Neural Networks

Jörg Bornschein, Silvia Chiappa, Alan Malek, Rosemary Nan Ke

Published 2021-07-02, Version 1

Learning the structure of Bayesian networks and causal relationships from observations is a common goal in several areas of science and technology. We show that the prequential minimum description length (MDL) principle can be used to derive a practical scoring function for Bayesian networks when flexible and overparametrized neural networks are used to model the conditional probability distributions between observed variables. MDL represents an embodiment of Occam's Razor, and we obtain plausible and parsimonious graph structures without relying on sparsity-inducing priors or other regularizers that must be tuned. Empirically we demonstrate competitive results on synthetic and real-world data. The score often recovers the correct structure even in the presence of strongly nonlinear relationships between variables, a scenario where prior approaches struggle and usually fail. Furthermore, we discuss how the prequential score relates to recent work that infers causal structure from the speed of adaptation when the observations come from a source undergoing distributional shift.
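The prequential score described in the abstract can be read as a sequential compression game: for each node, a model of its conditional distribution is repeatedly refit on the data seen so far and charged the log-loss of the next observation, and a graph's score is the sum of these code lengths over its nodes. The sketch below illustrates this idea under simplifying assumptions: cubic-polynomial regressors with Gaussian noise stand in for the paper's neural networks, full refitting on every prefix replaces any online training, and the names `features`, `preq_code_length`, and `score_dag` are hypothetical, not the authors' API.

```python
# Minimal sketch of a prequential (plug-in) MDL score for a candidate DAG.
# Cubic-polynomial Gaussian regressors are an illustrative stand-in for
# the neural networks used in the paper; all names here are hypothetical.
import numpy as np

def features(X_pa):
    """Bias column plus cubic polynomial basis of each parent variable."""
    cols = [np.ones(len(X_pa))]
    for j in range(X_pa.shape[1]):
        for d in (1, 2, 3):
            cols.append(X_pa[:, j] ** d)
    return np.column_stack(cols)

def preq_code_length(X_pa, x, warmup=20):
    """Sum of -log p(x_t | x_<t): refit on each prefix, score the next point."""
    total = 0.0
    for t in range(warmup, len(x)):
        A = features(X_pa[:t])
        w, *_ = np.linalg.lstsq(A, x[:t], rcond=None)
        sigma2 = max(((x[:t] - A @ w) ** 2).mean(), 1e-6)  # plug-in variance
        mu = features(X_pa[t:t + 1])[0] @ w
        total += 0.5 * (np.log(2 * np.pi * sigma2) + (x[t] - mu) ** 2 / sigma2)
    return total

def score_dag(X, parents):
    """Prequential score of a DAG: per-node code lengths add up."""
    return sum(preq_code_length(X[:, pa], X[:, i]) for i, pa in parents.items())

# Usage: a nonlinear cause-effect pair where x0 -> x1.
rng = np.random.default_rng(0)
x0 = rng.normal(size=500)
x1 = x0 ** 3 + 0.1 * rng.normal(size=500)
X = np.column_stack([x0, x1])
print(score_dag(X, {0: [], 1: [0]}))  # correct graph: shorter code length
print(score_dag(X, {0: [1], 1: []}))  # reversed graph: typically longer
```

On this toy pair the causal direction typically yields the shorter total code length, because the reversed conditional is harder to fit with the same model class; this is the intuition behind using the prequential code length for structure selection. Note that for purely linear-Gaussian data the two directions would compress equally well, which is why the example uses a nonlinear mechanism.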
