Conditional generative models are now acknowledged as an essential tool in machine learning. This paper focuses on their control. While many approaches aim to disentangle the data through coordinate-wise control of their latent representations, this paper explores another direction. The proposed CompVAE handles data with a natural multi-ensemblist structure (i.e. data that can naturally be decomposed into elements). Derived from Bayesian variational principles, CompVAE learns a latent representation leveraging both observational and symbolic information. A first contribution of the approach is that this latent representation supports a compositional generative model, amenable to multi-ensemblist operations (additio...
In this paper, we propose a novel structure for a multi-modal data association referred to as Associ...
We investigate the problem of learning representations that are invariant to certain nuisance or se...
Multimodal VAEs have recently gained attention as efficient models for weakly-supervised generative ...
We propose a method for learning the dependency structure between latent variables in deep latent va...
Recent advances in applying deep neural networks to Bayesian modelling have sparked a resurgence of inte...
Deep generative models have been wildly successful at learning coherent latent representations for c...
We use variational auto-encoders (VAE) to transform the set of finite automata into a continuous ...
Learning flexible latent representations of observed data is an important precursor for most downstre...
In this paper we propose a conditional generative modelling (CGM) approach for unsupervised disentan...
We would like to learn a representation of the data that reflects the semantics behind a specific gr...
Variational autoencoders (VAEs) are a strong family of deep generative models based on variational in...
This paper explores two useful modifications of the recent variational autoencoder (VAE), a popular ...
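All of the entries above build on the variational autoencoder. As a common reference point, here is a minimal numpy sketch of the machinery these abstracts assume: the reparameterization trick, the closed-form KL between a diagonal Gaussian posterior and a standard-normal prior, and a single-sample ELBO estimate under a fixed-variance Gaussian decoder. The toy "encoder" and "decoder" maps are placeholders for illustration only, not any paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    # z = mu + sigma * eps, eps ~ N(0, I): the reparameterization trick,
    # which lets gradients flow through the sampling step.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    # Closed-form KL( N(mu, diag(sigma^2)) || N(0, I) ), summed over latent dims.
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=-1)

def gaussian_log_likelihood(x, x_hat, sigma=1.0):
    # log p(x|z) under a fixed-variance Gaussian decoder.
    d = x.shape[-1]
    return -0.5 * (np.sum((x - x_hat) ** 2, axis=-1) / sigma**2
                   + d * np.log(2 * np.pi * sigma**2))

# Toy encoder/decoder maps, just to exercise the ELBO end to end.
x = rng.standard_normal((4, 3))          # batch of 4 observations, dim 3
mu, log_var = 0.5 * x, np.zeros_like(x)  # pretend encoder outputs
z = reparameterize(mu, log_var)
x_hat = z                                # pretend decoder
elbo = gaussian_log_likelihood(x, x_hat) - kl_to_standard_normal(mu, log_var)
print(elbo.shape)  # one single-sample ELBO estimate per batch element
```

In a real model the two placeholder maps are neural networks trained by maximizing the ELBO; the variants surveyed above differ mainly in how the prior, the posterior factorization, or the conditioning information is structured around this same objective.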