We extend PAC-Bayesian theory to generative models and develop generalization bounds for models based on the Wasserstein distance and the total variation distance. Our first result, on the Wasserstein distance, assumes the instance space is bounded, while our second result takes advantage of dimensionality reduction. Our results apply naturally to Wasserstein GANs and Energy-Based GANs, and our bounds provide new training objectives for these two models. Although our work is mainly theoretical, we perform numerical experiments showing non-vacuous generalization bounds for Wasserstein GANs on synthetic datasets.
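For context, the classical PAC-Bayesian bound (Maurer's form, for losses bounded in [0, 1]) already has the empirical-term-plus-complexity-term shape that the Wasserstein and total variation bounds above generalize; this is the textbook statement, quoted for reference, not the paper's own theorem. With a prior $\pi$ fixed before seeing the data, with probability at least $1-\delta$ over an i.i.d. sample of size $n$, simultaneously for all posteriors $\rho$,

\[
  \mathbb{E}_{h \sim \rho}\, R(h)
  \;\le\;
  \mathbb{E}_{h \sim \rho}\, \widehat{R}(h)
  + \sqrt{\frac{\mathrm{KL}(\rho \,\|\, \pi) + \ln\!\big(2\sqrt{n}/\delta\big)}{2n}},
\]

where $R$ and $\widehat{R}$ are the population and empirical risks and $\mathrm{KL}$ is the Kullback-Leibler divergence.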
Generative adversarial networks (GANs) are popular deep learning models that can implicitly learn rich...
Generative adversarial networks (GANs) are a class of generative models, for which the goal is to le...
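For reference, the learning goal these abstracts share can be made concrete with the original minimax objective of Goodfellow et al. (2014), in which a generator $G$ and a discriminator $D$ play

\[
  \min_G \max_D \;
  \mathbb{E}_{x \sim p_{\mathrm{data}}}[\log D(x)]
  + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))],
\]

with $p_z$ a fixed noise distribution. This is the standard formulation, quoted for context rather than taken from any single paper above.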
PAC-Bayesian bounds are known to be tight and informative when studying the generalization ability o...
We study how well generative adversarial networks (GANs) learn probability distributions from finite ...
In this paper, we study Wasserstein Generative Adversarial Networks (WGANs) using GroupSort neural ne...
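GroupSort, the activation underlying that line of work, sorts contiguous groups of a layer's pre-activations; with groups of size 2 it reduces to the MaxMin activation. Because sorting merely permutes its inputs, the activation preserves gradient norms, which is what makes it attractive for Lipschitz-constrained critics. A minimal NumPy sketch, where the function name group_sort and the group_size parameter are illustrative choices, not taken from the paper:

    import numpy as np

    def group_sort(x, group_size=2):
        # GroupSort activation: sort values within contiguous groups of features.
        # x has shape (batch, features); features must be divisible by group_size.
        # With group_size == 2 this reduces to the MaxMin activation.
        batch, features = x.shape
        assert features % group_size == 0, "features must divide evenly into groups"
        grouped = x.reshape(batch, features // group_size, group_size)
        return np.sort(grouped, axis=-1).reshape(batch, features)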
Generative Adversarial Networks (GANs) have achieved great success in unsupervised learning. Despi...
This paper studies how well generative adversarial networks (GANs) learn probability distributions f...
Generative adversarial nets (GANs) are very successful at modeling distributions from given samples,...
In this work, we construct generalization bounds to understand existing learning algorithms and prop...
Generative Adversarial Networks (GANs) were proposed in 2014 as a new method for efficiently producing r...
We investigate the training and performance of generative adversarial networks using the Maximum Mea...
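The Maximum Mean Discrepancy compares a real and a generated sample through a kernel. A minimal NumPy sketch of the standard unbiased estimator of the squared MMD with a Gaussian kernel; the bandwidth sigma and the function names are illustrative assumptions, not choices made in the paper:

    import numpy as np

    def gaussian_kernel(a, b, sigma=1.0):
        # Pairwise Gaussian (RBF) kernel matrix between the rows of a and b.
        sq_dists = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2 * a @ b.T
        return np.exp(-sq_dists / (2 * sigma**2))

    def mmd2_unbiased(x, y, sigma=1.0):
        # Unbiased estimator of squared MMD between sample x (m, d) and sample y (n, d).
        # Diagonal terms are excluded, which is what makes the estimator unbiased.
        m, n = len(x), len(y)
        k_xx = gaussian_kernel(x, x, sigma)
        k_yy = gaussian_kernel(y, y, sigma)
        k_xy = gaussian_kernel(x, y, sigma)
        term_xx = (k_xx.sum() - np.trace(k_xx)) / (m * (m - 1))
        term_yy = (k_yy.sum() - np.trace(k_yy)) / (n * (n - 1))
        return term_xx + term_yy - 2 * k_xy.mean()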
Statistical divergences play an important role in many data-driven applications. Two notable example...
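For reference, two integral probability metrics that recur throughout these abstracts are the Wasserstein-1 distance, written here in its Kantorovich-Rubinstein dual form, and the total variation distance; these are the standard definitions, not statements from any single paper above:

\[
  W_1(\mu, \nu) = \sup_{\|f\|_{\mathrm{Lip}} \le 1}
    \Big( \mathbb{E}_{x \sim \mu}[f(x)] - \mathbb{E}_{x \sim \nu}[f(x)] \Big),
  \qquad
  \mathrm{TV}(\mu, \nu) = \sup_{A \ \text{measurable}} \big| \mu(A) - \nu(A) \big|.
\]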
Deep generative models allow us to learn hidden representations of data and generate new examples. T...
Minimising upper bounds on the population risk or the generalisation gap has b...
While there has been progress in developing non-vacuous generalization bounds for deep neural networ...