Variational Inference using Implicit Models
In January 2017 I wrote a series of blog posts on adversarial algorithms for variational inference, which I eventually turned into a paper on arXiv. A slew of papers on adversarial VI is coming out this ICML season, so here are a few recommended reads that independently discovered and discussed similar ideas:
- Adversarial Message Passing For Graphical Models (Karaletsos, 2016). In Appendix 5, Theo discusses both prior-contrastive (5.5.1) and joint-contrastive (5.5.2) formulations.
- Adversarial Variational Bayes: Unifying Variational Autoencoders and Generative Adversarial Networks (Mescheder et al, 2017) focuses on the prior-contrastive algorithm I describe in Part II, but does a much better job with the experiments and the paper in general.
- Deep and Hierarchical Implicit Models (Tran et al, 2017) discusses variational inference in hierarchical latent variable models. Interestingly, Section 3.5 explains why the KL divergence uniquely satisfies sensible desiderata: scalability (SGD optimisation) and locality (compatibility with implicit models).
- possibly more, let me know...
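A common thread in these papers is the density-ratio trick: a classifier trained to distinguish samples from one distribution q from samples from another distribution p recovers the log density ratio log q(x)/p(x) in its logits, which can then stand in for intractable KL terms. A minimal sketch of this trick on two Gaussians (illustrative setup, not taken from any of the papers above):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Density-ratio trick: label samples from q as 1 and samples from p as 0;
# with balanced classes, the classifier's logit estimates log q(x)/p(x).
rng = np.random.default_rng(0)
xq = rng.normal(1.0, 1.0, size=(5000, 1))  # samples from q = N(1, 1)
xp = rng.normal(0.0, 1.0, size=(5000, 1))  # samples from p = N(0, 1)

X = np.vstack([xq, xp])
y = np.concatenate([np.ones(5000), np.zeros(5000)])

# Here the true log ratio is linear in x, so a linear classifier suffices;
# weak regularisation keeps the fit close to maximum likelihood.
clf = LogisticRegression(C=1e6).fit(X, y)

# KL(q || p) = E_q[log q/p] ~ average logit over samples from q.
kl_est = clf.decision_function(xq).mean()
print(kl_est)  # the analytic KL between N(1,1) and N(0,1) is 0.5
```

In adversarial VI the same idea is applied with q an implicit variational posterior and p either the prior (prior-contrastive) or a joint distribution (joint-contrastive), with the discriminator trained alongside the model.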
My blog posts are available here:
- Part I (you are here): Inference of a single, global variable (Bayesian logistic regression)
- Part II: Amortised Inference via the Prior-Contrastive Method (Explaining Away Demo)
- Part III: Amortised Inference via a Joint-Contrastive Method (ALI, BiGAN)
- Part IV: Using Denoisers instead of Discriminators