In this paper, we propose a variational autoencoder with disentanglement priors, VAE-DPRIOR, for task-specific natural language generation with few or no task-specific labeled examples. To tackle compositional generalization across tasks, our model performs disentangled representation learning by introducing one conditional prior for the latent content space and another conditional prior for the latent label space. Both priors satisfy a novel property we call $\epsilon$-disentanglement. We show both empirically and theoretically that these priors disentangle representations even without the dedicated regularizers required in prior work. The content prior enables directly sampling diverse content representations fr...
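As a concrete illustration of this architecture, the sketch below pairs each latent space with its own conditional prior network and penalizes the KL divergence of each approximate posterior against its conditional prior. All module names, dimensions, and the Gaussian parameterizations are illustrative assumptions for exposition, not the VAE-DPRIOR implementation.

```python
# Minimal sketch of a VAE with two conditional priors (content + label).
# Dimensions, module names, and Gaussian parameterizations are assumptions.
import torch
import torch.nn as nn

class CondGaussian(nn.Module):
    """Maps a conditioning vector to the mean/log-variance of a Gaussian."""
    def __init__(self, cond_dim, z_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(cond_dim, 128), nn.ReLU())
        self.mu = nn.Linear(128, z_dim)
        self.logvar = nn.Linear(128, z_dim)

    def forward(self, c):
        h = self.net(c)
        return self.mu(h), self.logvar(h)

def reparameterize(mu, logvar):
    # z = mu + sigma * eps, with eps ~ N(0, I)
    return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

def gaussian_kl(mu_q, logvar_q, mu_p, logvar_p):
    """KL( N(mu_q, var_q) || N(mu_p, var_p) ), summed over latent dims."""
    var_q, var_p = logvar_q.exp(), logvar_p.exp()
    kl = 0.5 * (logvar_p - logvar_q + (var_q + (mu_q - mu_p) ** 2) / var_p - 1)
    return kl.sum(dim=-1)

class DisentangledPriorVAE(nn.Module):
    def __init__(self, x_dim=768, z_dim=64, n_labels=10, n_tasks=5):
        super().__init__()
        self.enc_content = CondGaussian(x_dim, z_dim)   # q(z_c | x)
        self.enc_label = CondGaussian(x_dim, z_dim)     # q(z_l | x)
        self.label_emb = nn.Embedding(n_labels, 32)
        self.task_emb = nn.Embedding(n_tasks, 32)
        self.prior_content = CondGaussian(32, z_dim)    # p(z_c | task)
        self.prior_label = CondGaussian(32, z_dim)      # p(z_l | label)
        self.decoder = nn.Sequential(                   # p(x | z_c, z_l)
            nn.Linear(2 * z_dim, 256), nn.ReLU(), nn.Linear(256, x_dim))

    def forward(self, x, label, task):
        mu_c, lv_c = self.enc_content(x)
        mu_l, lv_l = self.enc_label(x)
        z_c = reparameterize(mu_c, lv_c)
        z_l = reparameterize(mu_l, lv_l)
        x_hat = self.decoder(torch.cat([z_c, z_l], dim=-1))
        # KL terms against the *conditional* priors; this is what pushes the
        # two latent spaces to specialize to content vs. label information.
        kl_c = gaussian_kl(mu_c, lv_c, *self.prior_content(self.task_emb(task)))
        kl_l = gaussian_kl(mu_l, lv_l, *self.prior_label(self.label_emb(label)))
        recon = ((x - x_hat) ** 2).sum(dim=-1)          # stand-in likelihood
        return (recon + kl_c + kl_l).mean()
```

Under these assumptions, sampling a content code from `prior_content` and pairing it with a label code is one way a content prior could support direct sampling of diverse content representations, in the spirit of what the abstract describes.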
Variational AutoEncoders (VAEs) provide a means to generate representational latent embeddings. Prev...
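For background, the VAE variants collected here all build on the standard evidence lower bound (ELBO), maximized with respect to encoder parameters $\phi$ and decoder parameters $\theta$:

$\log p_\theta(x) \;\ge\; \mathbb{E}_{q_\phi(z \mid x)}\!\left[\log p_\theta(x \mid z)\right] \;-\; \mathrm{KL}\!\left(q_\phi(z \mid x) \,\|\, p(z)\right)$

Conditional-prior approaches such as VAE-DPRIOR replace the fixed prior $p(z)$ with priors conditioned on side information (e.g., the task or the label).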
Large language models have recently been shown to attain reasonable zero-shot ...
We present a new model of natural language processing in which natural language parsing and ...
Automatic generation of text is an important topic in natural language processing with applications ...
Variational autoencoders (VAEs) are a neural network architecture broadly used in image generation (...
We develop a generalisation of disentanglement in variational autoencoders (VAEs)—decomposition of t...
A large part of the literature on learning disentangled representations focuses on variational autoe...
In natural language processing (NLP), Transformer is widely used and has reached the state-of-the-ar...
In this paper we propose a conditional generative modelling (CGM) approach for unsupervised disentan...
Large-scale neural language models have made impressive strides in natural language generation. Howe...
Linking neural representations to linguistic factors is crucial in order to bu...