Variational encoder-decoders (VEDs) have shown promising results in dialogue generation. However, the latent variable distribution is usually approximated by a much simpler model than the powerful RNN structures used for encoding and decoding, which yields the KL-vanishing problem and an inconsistent training objective. In this paper, we separate training into two phases: the first phase learns to autoencode discrete texts into continuous embeddings, and the second phase learns to generalize latent representations by reconstructing the encoded embeddings. Latent variables are then sampled by transforming Gaussian noise through multi-layer perceptrons and are trained with a separate VED model, which has the potential of re...
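A minimal sketch of the two-phase scheme described above, assuming PyTorch. This is not the paper's released code: the module names `TextAutoencoder` and `EmbeddingVAE`, all layer sizes, and the choice of GRU encoder/decoder are illustrative assumptions; only the overall structure (phase 1: text autoencoder; phase 2: MLP-based VAE that reconstructs the encoded embeddings from Gaussian noise) follows the abstract.

```python
# Hypothetical sketch of the two-phase training scheme; names and
# hyperparameters are assumptions, not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextAutoencoder(nn.Module):
    """Phase 1: autoencode discrete token sequences into continuous embeddings."""
    def __init__(self, vocab_size, emb_dim=128, hid_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.decoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def encode(self, tokens):                    # tokens: (B, T) int64
        _, h = self.encoder(self.embed(tokens))  # h: (1, B, hid_dim)
        return h.squeeze(0)                      # sentence embedding: (B, hid_dim)

    def forward(self, tokens):
        h = self.encode(tokens)
        # Teacher-forced reconstruction conditioned on the sentence embedding.
        dec_out, _ = self.decoder(self.embed(tokens), h.unsqueeze(0))
        return self.out(dec_out)                 # logits: (B, T, vocab_size)

class EmbeddingVAE(nn.Module):
    """Phase 2: a small MLP VAE trained to reconstruct the *encoded embeddings*,
    so latent samples are produced by pushing Gaussian noise through MLPs."""
    def __init__(self, hid_dim=256, z_dim=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(hid_dim, 128), nn.Tanh())
        self.mu = nn.Linear(128, z_dim)
        self.logvar = nn.Linear(128, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, 128), nn.Tanh(),
                                 nn.Linear(128, hid_dim))

    def forward(self, h):                        # h: frozen phase-1 embeddings
        stats = self.enc(h)
        mu, logvar = self.mu(stats), self.logvar(stats)
        # Reparameterization: transform Gaussian noise through the MLP decoder.
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        h_hat = self.dec(z)
        recon = F.mse_loss(h_hat, h)
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return recon + kl
```

In this reading, phase 1 is trained with a standard cross-entropy reconstruction loss over tokens; phase 2 then freezes the autoencoder and fits the MLP VAE on its embeddings, so the simple latent model never has to match the RNN's capacity directly.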
In the past few years, generative models have become an interesting topic in the field of Machine Lea...
Conditional variational models, using either continuous or discrete latent variables, are powerful f...
Recent successes of open-domain dialogue generation mainly rely on the advances of deep neural netwo...
A key advance in learning generative models is the use of amortized inference distributions that are...
Learning flexible latent representations of observed data is an important precursor for most downstre...
Data scarcity is one of the main obstacles of domain adaptation in spoken language understanding (SL...
Autoencoders are self-supervised learning systems where, during training, the output is an approxim...
Automatic generation of text is an important topic in natural language processing with applications ...
Deep generative models have been wildly successful at learning coherent latent representations for c...
Recently, two approaches, fine-tuning large pre-trained language models and variational training, ha...
This paper presents a generative approach to speech enhancement based on a rec...