Thesis (Master's)--University of Washington, 2020. This thesis presents a study designed to test the effect of generative adversarial network (GAN) training on the quality of natural language generation (NLG) using a pre-trained language model architecture: Bidirectional Encoder Representations from Transformers (BERT). Perplexity and BLEU scores were used as evaluation metrics on 1,000 samples of generated text. Results indicated that perplexity decreased and BLEU scores computed against the original data distribution increased; thus, there was evidence that the quality of NLG was improved by the introduction of GAN training. This alternative training method may also be effective for other, more state-of-the-art pre-trained architectures.
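The two metrics named in this abstract can be sketched in plain Python. This is a minimal illustration only: the abstract does not specify tokenization, BLEU smoothing, or the language model supplying token log-probabilities, so those are assumptions here (BLEU uses simple n-gram precision with a brevity penalty; perplexity is computed from a given list of per-token log-probabilities).

```python
import math
from collections import Counter

def ngram_counts(tokens, n):
    """Count the n-grams of length n in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU sketch: geometric mean of modified n-gram
    precisions (n = 1..max_n) times a brevity penalty. No smoothing."""
    precisions = []
    for n in range(1, max_n + 1):
        cand = ngram_counts(candidate, n)
        ref = ngram_counts(reference, n)
        overlap = sum(min(count, ref[gram]) for gram, count in cand.items())
        total = max(sum(cand.values()), 1)
        precisions.append(max(overlap, 1e-9) / total)  # floor avoids log(0)
    # Brevity penalty: penalize candidates shorter than the reference.
    if len(candidate) >= len(reference):
        bp = 1.0
    else:
        bp = math.exp(1 - len(reference) / max(len(candidate), 1))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

def perplexity(token_log_probs):
    """Perplexity = exp of the average negative log-likelihood per token."""
    return math.exp(-sum(token_log_probs) / len(token_log_probs))
```

For example, a candidate identical to its reference scores a BLEU of 1.0, and a model assigning every token probability 0.5 has perplexity 2.0. Real evaluations (as in the thesis) would aggregate these over the full sample set and use a trained model's token probabilities.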
GANs (generative adversarial networks) are a technique for learning deep representations in the absence...
Automated evaluation of open domain natural language generation (NLG) models remains a challenge and...
Thesis (Ph.D.)--University of Washington, 2020. Natural language generation plays an important role in...
In the past few years, generative adversarial networks (GANs) have become increasingly important in ...
Transfer learning is the application of knowledge or patterns learned in a particular field or task to differe...
The ability to learn robust, resizable feature representations from unlabeled data has potential app...
Text-to-text generation is a fundamental task in natural language processing. Traditional models rel...
Text Generation is a pressing topic of Natural Language Processing that involves the prediction of u...
Since the first bidirectional deep learning model for natural language understanding, BERT, emerge...
The article is an essay on the development of technologies for natural language processing, which fo...
Generative Adversarial Networks (GANs) have proven to be efficient systems for data generation and o...
Training generative models that can generate high-quality text with sufficient diversity is an impor...