Variational autoencoders (VAEs) are a popular framework for modeling complex data distributions; they can be efficiently trained via variational inference by maximizing the evidence lower bound (ELBO), at the expense of a gap to the exact (log-)marginal likelihood. While VAEs are commonly used for representation learning, it is unclear why ELBO maximization would yield useful representations, since unregularized maximum likelihood estimation cannot invert the data-generating process. Yet, VAEs often succeed at this task. We seek to elucidate this apparent paradox by studying nonlinear VAEs in the limit of near-deterministic decoders. We first prove that, in this regime, the optimal encoder approximately inverts the decoder -- a commonly used but unproven conjecture, which we refer to as self-consistency.
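For context, the gap mentioned above has a standard closed-form characterization (a textbook identity, not specific to this paper): for any encoder $q_\phi$ and decoder $p_\theta$,

\[
\log p_\theta(x)
= \underbrace{\mathbb{E}_{q_\phi(z \mid x)}\!\left[\log \frac{p_\theta(x, z)}{q_\phi(z \mid x)}\right]}_{\mathrm{ELBO}(\theta,\phi;\,x)}
+ \underbrace{\mathrm{KL}\!\left(q_\phi(z \mid x) \,\|\, p_\theta(z \mid x)\right)}_{\text{gap}\,\ge\,0}.
\]

The gap vanishes exactly when the variational posterior matches the true posterior. As a sketch of the intuition, in a near-deterministic regime (e.g., a Gaussian decoder $p_\theta(x \mid z) = \mathcal{N}(x;\, f_\theta(z),\, \gamma I)$ with $\gamma \to 0$; the Gaussian form is an illustrative assumption here), the true posterior concentrates on the decoder's preimage of $x$, so an optimal encoder must approximately invert the decoder, which is the self-consistency property referenced above.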