Generative modeling and inference are two broad categories in unsupervised learning that aim to answer the following questions, respectively: 1. Given a dataset, how do we model (either implicitly or explicitly) the underlying probability distribution from which the data came, and draw samples from that distribution? 2. How can we learn an underlying abstract representation of the data? In this dissertation we present three studies, each of which improves upon specific generative modeling and inference techniques in a different way. First, we develop a state-of-the-art estimator of a generic probability distribution's partition function, or normalizing constant, during simulated tempering. We then apply our estimator to the specific case of ...
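To make the partition-function problem concrete, here is a minimal sketch of estimating a normalizing constant by annealing from a tractable base distribution to an unnormalized target (annealed importance sampling over a tempering ladder). This is a generic textbook construction, not the dissertation's estimator; the 1-D target, temperature schedule, and Metropolis step size are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_f0(x):
    # Unnormalized base: standard normal, known Z0 = sqrt(2*pi).
    return -0.5 * x**2

def log_f1(x):
    # Unnormalized illustrative target: 5 * exp(-(x-3)^2 / 2),
    # so the true Z1 = 5 * sqrt(2*pi) ~= 12.53.
    return np.log(5.0) - 0.5 * (x - 3.0) ** 2

betas = np.linspace(0.0, 1.0, 101)  # tempering ladder from base to target
n_chains = 5000

x = rng.standard_normal(n_chains)   # exact samples from the base p0
log_w = np.zeros(n_chains)          # running log importance weights

for b_prev, b in zip(betas[:-1], betas[1:]):
    # Accumulate the weight increment before moving the chains.
    log_w += (b - b_prev) * (log_f1(x) - log_f0(x))

    # One Metropolis step targeting the intermediate f0^(1-b) * f1^b.
    def log_fb(y):
        return (1.0 - b) * log_f0(y) + b * log_f1(y)

    prop = x + 0.5 * rng.standard_normal(n_chains)
    accept = np.log(rng.random(n_chains)) < log_fb(prop) - log_fb(x)
    x = np.where(accept, prop, x)

Z0 = np.sqrt(2.0 * np.pi)
Z1_hat = Z0 * np.mean(np.exp(log_w))  # estimate of the target's Z
print(Z1_hat)
```

The mean of the exponentiated weights estimates the ratio Z1/Z0, so multiplying by the known base constant recovers the target's partition function; with a finer ladder and more chains the estimate concentrates around the true value.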
Evaluation of generative models is mostly based on the comparison between the estimated distribution...
Unsupervised learning of generative models has seen tremendous progress over r...
We introduce a novel training principle for probabilistic models that is an alternative to maximum l...
Statistical models which allow generating simulations without providing access to the density of the...
Understanding the impact of data structure on the computational tractability of learning is a key ch...
Deep Latent Variable Models are generative models combining Bayesian Networks and deep learning, ill...
We introduce a novel training principle for generative probabilistic models that is an alternative ...
Learning is the ability to generalise beyond training examples; but because many generalisations are...
Generative models have been one of the major research fields in unsupervised deep learning during t...
This paper proposes a new type of generative model that is able to quickly learn a latent representa...
We apply tools from the classical statistical learning theory to analyze theoretical properties of m...
This thesis studies the problem of regularizing and optimizing generative models, often using insigh...
Probabilistic graphical modeling (PGM) provides a framework for formulating an interpretable generat...
Our paper deals with inferring simulator-based statistical models given some observed data. A simula...