In this paper, we propose a variational autoencoder with disentanglement priors, VAE-DPRIOR, for task-specific natural language generation with no or only a handful of task-specific labeled examples. To tackle compositional generalization across tasks, our model performs disentangled representation learning by introducing a conditional prior over the latent content space and another conditional prior over the latent label space. Both priors satisfy a novel property called ϵ-disentangled. We show both empirically and theoretically that these priors disentangle representations even without the explicit regularization terms used in prior work. The content prior enables directly sampling diverse content representations from the co...
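To make the described architecture concrete, below is a minimal PyTorch sketch of a VAE whose latent space splits into a content part and a label part, each regularized toward its own conditional Gaussian prior rather than the usual N(0, I). All names (ConditionalPrior, DisentangledVAE), dimensions, and the choice of conditioning signals (a task index for the content prior, the label for the label prior) are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch: a VAE with two latent sub-spaces, each matched to a
# conditional prior. The encoder/decoder are deliberately toy stand-ins.
import torch
import torch.nn as nn

class ConditionalPrior(nn.Module):
    """Maps a conditioning embedding to (mu, logvar) of a diagonal Gaussian prior."""
    def __init__(self, cond_dim: int, z_dim: int):
        super().__init__()
        self.mu = nn.Linear(cond_dim, z_dim)
        self.logvar = nn.Linear(cond_dim, z_dim)

    def forward(self, cond: torch.Tensor):
        return self.mu(cond), self.logvar(cond)

class DisentangledVAE(nn.Module):
    """Toy VAE with separate content/label latents, each with a conditional prior."""
    def __init__(self, vocab_size: int, num_tasks: int, num_labels: int,
                 hid: int = 256, z_dim: int = 64):
        super().__init__()
        self.encoder = nn.EmbeddingBag(vocab_size, hid)   # stand-in text encoder
        self.q_content = nn.Linear(hid, 2 * z_dim)        # q(z_c | x)
        self.q_label = nn.Linear(hid, 2 * z_dim)          # q(z_l | x)
        self.task_emb = nn.Embedding(num_tasks, hid)
        self.label_emb = nn.Embedding(num_labels, hid)
        self.p_content = ConditionalPrior(hid, z_dim)     # p(z_c | task), an assumption
        self.p_label = ConditionalPrior(hid, z_dim)       # p(z_l | label)
        self.decoder = nn.Linear(2 * z_dim, vocab_size)   # stand-in decoder

    @staticmethod
    def sample(mu, logvar):
        # Reparameterization trick: z = mu + sigma * eps
        return mu + (0.5 * logvar).exp() * torch.randn_like(mu)

    @staticmethod
    def kl(mu_q, lv_q, mu_p, lv_p):
        # KL between two diagonal Gaussians, summed over latent dimensions
        return 0.5 * (lv_p - lv_q
                      + (lv_q.exp() + (mu_q - mu_p) ** 2) / lv_p.exp()
                      - 1).sum(-1)

    def forward(self, tokens, tasks, labels):
        h = self.encoder(tokens)
        mu_c, lv_c = self.q_content(h).chunk(2, dim=-1)
        mu_l, lv_l = self.q_label(h).chunk(2, dim=-1)
        z = torch.cat([self.sample(mu_c, lv_c), self.sample(mu_l, lv_l)], dim=-1)
        # Each posterior is pulled toward its own conditional prior, not N(0, I).
        pm_c, plv_c = self.p_content(self.task_emb(tasks))
        pm_l, plv_l = self.p_label(self.label_emb(labels))
        kl = self.kl(mu_c, lv_c, pm_c, plv_c) + self.kl(mu_l, lv_l, pm_l, plv_l)
        return self.decoder(z), kl

# Usage: shapes only, to show the training-time interface of the sketch.
model = DisentangledVAE(vocab_size=1000, num_tasks=3, num_labels=5)
tokens = torch.randint(0, 1000, (8, 12))   # batch of 8 token-id sequences
logits, kl = model(tokens, torch.randint(0, 3, (8,)), torch.randint(0, 5, (8,)))
```

At generation time one would sample z_c from the content prior and z_l from the label prior for a possibly unseen (task, label) pairing, mirroring the compositional sampling the abstract describes; the split KL term above is only a sketch of that idea, not the paper's exact objective.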
Linking neural representations to linguistic factors is crucial in order to bu...
We would like to learn a representation of the data that reflects the semantics behind a specific gr...
Two recent works have shown the benefit of modeling both high-level factors an...
A large part of the literature on learning disentangled representations focuses on variational autoe...
In this paper we propose a conditional generative modelling (CGM) approach for unsupervised disentan...
We develop a generalisation of disentanglement in variational autoencoders (VAEs)—decomposition of t...
Variational AutoEncoders (VAEs) provide a means to generate representational latent embeddings. Prev...
Variational autoencoders (VAEs) are a neural network architecture broadly used in image generation (...
In natural language processing (NLP), the Transformer is widely used and has reached the state-of-the-ar...
Automatic generation of text is an important topic in natural language processing with applications ...
Improving controllability, or the ability to manipulate one or more attributes of the generated data ...