In this paper, we address the problem of conditional modality learning, whereby one is interested in generating one modality given the other. While it is straightforward to learn a joint distribution over multiple modalities using a deep multimodal architecture, we observe that such models are not very effective at conditional generation. Hence, we address the problem by learning conditional distributions between the modalities. We use variational methods for maximizing the corresponding conditional log-likelihood. The resultant deep model, which we refer to as conditional multimodal autoencoder (CMMA), forces the latent representation obtained from a single modality alone to be `close' to the joint representation obtained from multiple mod...
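The core idea of the abstract above — penalizing the divergence between the latent representation from a single modality and the joint representation from all modalities — can be sketched with the closed-form KL divergence between diagonal Gaussian posteriors. This is a minimal illustrative sketch, not the paper's implementation; all names and numeric values are hypothetical:

```python
import numpy as np

def kl_diag_gaussians(mu_q, logvar_q, mu_p, logvar_p):
    """KL(q || p) for two diagonal Gaussians, summed over latent dimensions."""
    var_q, var_p = np.exp(logvar_q), np.exp(logvar_p)
    return 0.5 * np.sum(
        logvar_p - logvar_q + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0
    )

# Joint posterior q(z | x, y) inferred from both modalities (illustrative values)
mu_joint, logvar_joint = np.array([0.5, -0.2]), np.array([-1.0, -1.0])

# Single-modality posterior q(z | x) that the model pulls toward the joint one
mu_single, logvar_single = np.array([0.3, 0.1]), np.array([-0.5, -0.8])

# A penalty of this form, added to the reconstruction loss, forces the
# single-modality code to stay `close' to the joint code.
penalty = kl_diag_gaussians(mu_joint, logvar_joint, mu_single, logvar_single)
print(penalty)
```

The KL term is zero exactly when the two posteriors coincide, so minimizing it drives the single-modality encoder toward the joint representation, which is what enables conditional generation from one modality alone.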
A number of variational autoencoders (VAEs) have recently emerged with the aim of modeling multimoda...
Model selection methods based on stochastic regularization such as Dropout ha...
In practical machine learning settings, there often exist relations or links between data from diffe...
Deep generative models with latent variables have been used lately to learn joint representations an...
One of the major shortcomings of variational autoencoders is the inability to produce generations fr...
Multimodal generative models learn a joint distribution over multiple modalities and thus have the p...
Multimodal learning is a framework for building models that make predictions based on different type...
Multi-Entity Dependence Learning (MEDL) explores conditional correlations among multiple entities. T...
Devising deep latent variable models for multi-modal data has been a long-standing theme in machine ...
In this paper we propose a conditional generative modelling (CGM) approach for unsupervised disentan...
Generative models have recently shown the ability to realistically generate data and model the distr...
We introduce Multi-Conditional Learning, a framework for optimizing graphical models based not on jo...
A Deep Boltzmann Machine is described for learning a generative model of data that consists of multi...
Class-conditional generative models are crucial tools for data generation from user-specified class ...