Recent works have shown promising results of prompt tuning in stimulating pre-trained language models (PLMs) for natural language processing (NLP) tasks. However, to the best of our knowledge, existing works focus on prompt-tuning generative PLMs that are pre-trained to generate target tokens, such as BERT. It is still unknown whether and how discriminative PLMs, e.g., ELECTRA, can be effectively prompt-tuned. In this work, we present DPT, the first prompt tuning framework for discriminative PLMs, which reformulates NLP tasks into a discriminative language modeling problem. Comprehensive experiments on text classification and question answering show that, compared with vanilla fine-tuning, DPT achieves significantly higher performance, and ...
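To make the reformulation above concrete, here is a minimal sketch of how a text classification task can be cast as discriminative language modeling with ELECTRA's replaced-token-detection head. It assumes the Hugging Face transformers library and the google/electra-small-discriminator checkpoint; the prompt template, label words, and the classify helper are hypothetical illustrations of the general idea, not the authors' actual DPT implementation.

import torch
from transformers import ElectraTokenizerFast, ElectraForPreTraining

tokenizer = ElectraTokenizerFast.from_pretrained("google/electra-small-discriminator")
model = ElectraForPreTraining.from_pretrained("google/electra-small-discriminator")
model.eval()

def classify(text: str, label_words=("great", "terrible")) -> str:
    """Score each candidate label word by how 'original' (i.e., not replaced)
    the ELECTRA discriminator judges it to be inside a cloze-style prompt."""
    scores = []
    for word in label_words:
        prompt = f"{text} It was {word}."
        inputs = tokenizer(prompt, return_tensors="pt")
        with torch.no_grad():
            # Per-token logits: higher values mean the discriminator thinks the
            # token was replaced; lower values mean it looks original in context.
            logits = model(**inputs).logits[0]
        # Locate the label word's token position(s) within the prompt.
        word_ids = tokenizer(word, add_special_tokens=False)["input_ids"]
        ids = inputs["input_ids"][0].tolist()
        pos = next(i for i in range(len(ids)) if ids[i:i + len(word_ids)] == word_ids)
        scores.append(logits[pos:pos + len(word_ids)].mean().item())
    # Choose the label whose word looks most "original" (lowest replaced logit).
    return label_words[int(torch.tensor(scores).argmin())]

print(classify("The movie was a joy from start to finish."))

In this sketch the discriminator is used zero-shot as a verbalizer scorer; in a prompt-tuning setup the same scoring head would instead be fine-tuned on the reformulated task.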
The advent of large-scale Pretrained Language Models (PLM) in the field of Natural Language Process...
Paper presented at the Conference on Empirical Methods in Natural Language Processing (EMNLP 20...
Pre-trained masked language models successfully perform few-shot learning by formulating downstream ...
Prompt-based fine-tuning has boosted the performance of Pre-trained Language Models (PLMs) on few-sh...
Large-scale pre-trained language models have contributed significantly to natural language processin...
Speech representations learned from self-supervised learning (SSL) models can benefit various speech...
Pretrained language models (PLMs) have demonstrated remarkable performance in various natural langua...
Prompting has shown impressive success in enabling large pretrained language models (LMs) to perform...
This paper explores the effectiveness of prompt programming in the fine-tuning process of a Hungaria...
Natural language processing (NLP) techniques have improved significantly with the introduction of pre-trained l...
Prompt learning is a new paradigm in the Natural Language Processing (NLP) field which has shown imp...
Text Generation aims to produce plausible and readable text in a human language from input data. The...
In recent years, there has been significant progress in developing pre-trained language models for N...
In this project, we want to explore the newly emerging field of prompt engineering and apply it to t...
Recent research has shown that large language models pretrained using unsupervised approaches can ac...