arXiv:1804.03782 [cs.LG]

CoT: Cooperative Training for Generative Modeling

Sidi Lu, Lantao Yu, Weinan Zhang, Yong Yu

Published 2018-04-11 (Version 1)

We propose Cooperative Training (CoT) for training generative models that define a tractable density over target data. CoT jointly trains a generator $G$ and an auxiliary predictive mediator $M$. The mediator $M$ is trained to estimate the mixture density of the learned distribution $G$ and the target distribution $P$, while $G$ is trained to minimize the Jensen-Shannon divergence estimated through $M$. CoT succeeds on its own, without pre-training via Maximum Likelihood Estimation or resorting to high-variance algorithms like REINFORCE. The resulting low-variance algorithm is proved to be unbiased for both the generative and predictive tasks. We further show, both theoretically and empirically, that CoT outperforms most previous algorithms in generative quality and diversity, predictive generalization ability, and computational cost.
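To make the two-player objective concrete, here is a minimal sketch of the CoT loop on a toy categorical distribution, assuming both $G$ and $M$ have tractable explicit densities. The mediator is fit by maximum likelihood on an equal mix of real and generated samples (so it approximates $(P+G)/2$), and the generator minimizes the $G$-dependent part of the $M$-estimated JSD, namely $\mathrm{KL}(G\,\|\,M)$, using exact gradients rather than REINFORCE. This is a simplification of the paper's sequence-level algorithm; the names (`target_probs`, `n_categories`) and hyperparameters are illustrative, not from the paper.

```python
# Minimal CoT sketch on a toy categorical target (illustrative, not the
# paper's sequence-level algorithm).
import torch

torch.manual_seed(0)
n_categories = 8
target_probs = torch.softmax(torch.randn(n_categories), dim=0)  # target P

g_logits = torch.zeros(n_categories, requires_grad=True)  # generator G
m_logits = torch.zeros(n_categories, requires_grad=True)  # mediator M
opt_g = torch.optim.Adam([g_logits], lr=1e-2)
opt_m = torch.optim.Adam([m_logits], lr=1e-2)

for step in range(2000):
    # Mediator step: maximum likelihood on a 50/50 mix of real samples
    # (from P) and generated samples (from G), so M -> (P + G) / 2.
    real = torch.multinomial(target_probs, 64, replacement=True)
    fake = torch.multinomial(torch.softmax(g_logits.detach(), 0), 64,
                             replacement=True)
    log_m = torch.log_softmax(m_logits, dim=0)
    loss_m = -(log_m[real].mean() + log_m[fake].mean()) / 2
    opt_m.zero_grad(); loss_m.backward(); opt_m.step()

    # Generator step: minimize KL(G || M), the G-dependent part of the
    # M-estimated JSD, computed exactly from G's tractable density.
    log_g = torch.log_softmax(g_logits, dim=0)
    log_m_fixed = torch.log_softmax(m_logits.detach(), dim=0)
    loss_g = (log_g.exp() * (log_g - log_m_fixed)).sum()
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

print(torch.softmax(g_logits, 0))  # should approach target_probs
```

Because $G$'s density is explicit in this toy setting, the generator gradient is computed analytically, which reflects the low-variance property the abstract claims; in the paper this idea is applied prefix-by-prefix over sequences.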

Related articles:
arXiv:2202.02145 [cs.LG] (Published 2022-02-04)
Generative Modeling of Complex Data
arXiv:2405.13977 [cs.LG] (Published 2024-05-22)
Removing Bias from Maximum Likelihood Estimation with Model Autophagy
arXiv:2311.01660 [cs.LG] (Published 2023-11-03)
Maximum Likelihood Estimation of Flexible Survival Densities with Importance Sampling