Yashas Annadani

Semester Work
Supervisors: Octavian Eugen Ganea, Gary Becigneul and Prof. Luc Van Gool

Preventing Posterior Collapse in Variational Autoencoders using Noisy Samples

Posterior collapse in variational autoencoders, a degenerate optimum at which the variational posterior becomes equal to the prior, is a common failure mode when training autoregressive models on images and text. In this work, we analyze this problem mathematically from the perspective of the factorizability of the variational posterior, and provide insights into why existing models may undergo posterior collapse. Using these insights, we propose a new class of variational autoencoders which, in theory, does not suffer from posterior collapse. The central idea is to perform inference on the data distribution as well as on a noisy version of it, using a single inference network for both. Experiments on the Omniglot, Yahoo and Yelp datasets confirm that the proposed framework prevents posterior collapse.
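
To make the central idea concrete, the following is a minimal PyTorch sketch, not the exact model from this work: all module names, dimensions, the Gaussian corruption, and the choice of reconstruction target for the noisy branch are illustrative assumptions. It shows a single shared inference network applied to both a clean sample and its noisy copy, with the two ELBO terms optimized jointly.

```python
import torch
import torch.nn as nn

class SharedInferenceVAE(nn.Module):
    """Sketch: one inference network serves both clean and noisy inputs."""

    def __init__(self, x_dim=784, z_dim=32, h_dim=256, noise_std=0.1):
        super().__init__()
        self.noise_std = noise_std  # assumed Gaussian corruption level
        self.encoder = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        self.decoder = nn.Sequential(
            nn.Linear(z_dim, h_dim), nn.ReLU(), nn.Linear(h_dim, x_dim))

    def infer(self, x):
        # Same (shared) inference network for every input it receives.
        h = self.encoder(x)
        return self.mu(h), self.logvar(h)

    def neg_elbo(self, x, target):
        mu, logvar = self.infer(x)
        # Reparameterization trick: z = mu + sigma * eps.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        recon = nn.functional.binary_cross_entropy_with_logits(
            self.decoder(z), target, reduction="sum")
        # KL divergence between q(z|x) and a standard normal prior.
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        return recon + kl

    def forward(self, x):
        # Noisy copy of x; reconstructing the clean x from it is an
        # illustrative assumption, not necessarily the paper's objective.
        x_tilde = x + self.noise_std * torch.randn_like(x)
        return self.neg_elbo(x, x) + self.neg_elbo(x_tilde, x)
```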