Why can pre-trained language models (PLMs) learn universal representations and adapt effectively to a broad range of NLP tasks that differ greatly on the surface? In this work, we find empirical evidence that the adaptation of PLMs to various few-shot tasks can be reparameterized as optimizing only a few free parameters in a unified low-dimensional intrinsic task subspace, which may help explain why PLMs can adapt so easily to diverse NLP tasks with small-scale data. To find such a subspace and examine its universality, we propose an analysis pipeline called intrinsic prompt tuning (IPT). Specifically, we build on the recent success of prompt tuning and decompose the soft prompts of multiple NLP tasks into the same low-dimensional nonli...
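Based only on the description above, the following is a minimal, hypothetical PyTorch sketch of the idea: soft prompts tuned on many tasks are forced through a shared low-dimensional bottleneck, and a new few-shot task is then adapted by optimizing only the bottleneck coordinates. The module names, dimensions, and the autoencoder-style decomposition are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical sizes: each soft prompt is `prompt_len` token embeddings of width
# `hidden`; the shared intrinsic task subspace has dimension `intrinsic_dim`.
prompt_len, hidden, intrinsic_dim = 16, 768, 8
flat = prompt_len * hidden


class IntrinsicPromptDecomposer(nn.Module):
    """Autoencoder-style decomposition of task-specific soft prompts into a
    shared low-dimensional subspace (illustrative sketch, not the paper's code)."""

    def __init__(self):
        super().__init__()
        # Encoder: flattened soft prompt -> intrinsic-subspace coordinates.
        self.encode = nn.Sequential(nn.Linear(flat, intrinsic_dim), nn.Tanh())
        # Decoder: intrinsic coordinates -> reconstructed soft prompt
        # (a nonlinear back-projection).
        self.decode = nn.Sequential(
            nn.Linear(intrinsic_dim, 256), nn.Tanh(), nn.Linear(256, flat)
        )

    def forward(self, soft_prompt):                      # (n_tasks, prompt_len, hidden)
        z = self.encode(soft_prompt.flatten(1))          # (n_tasks, intrinsic_dim)
        recon = self.decode(z).view(-1, prompt_len, hidden)
        return z, recon


decomposer = IntrinsicPromptDecomposer()

# Multi-task phase (sketch): push the tuned prompts of many training tasks
# through the same intrinsic_dim bottleneck by minimizing reconstruction error.
tuned_prompts = torch.randn(32, prompt_len, hidden)      # stand-in for 32 tasks' prompts
_, recon = decomposer(tuned_prompts)
recon_loss = F.mse_loss(recon, tuned_prompts)

# Few-shot adaptation phase (sketch): keep the decoder and the PLM frozen and
# optimize only the intrinsic_dim free parameters of the new task.
z_new = nn.Parameter(torch.zeros(1, intrinsic_dim))
new_prompt = decomposer.decode(z_new).view(1, prompt_len, hidden)  # prepended to PLM inputs
```

The point the sketch tries to convey is that, after the multi-task phase, adapting to a new task touches only `intrinsic_dim` parameters rather than the full `prompt_len * hidden` soft prompt.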
In this paper, we move towards combining large parametric models with non-parametric prototypical ne...
Prompt-based learning has been an effective paradigm for large pretrained language models (LLMs), ena...
There is growing interest in adapting large-scale language models using parameter-efficient fine-t...
Prompt tuning learns soft prompts to condition frozen Pre-trained Language Models (PLMs) for perform...
When a neural language model (LM) is adapted to perform a new task, what aspects of the task predict...
Prompt-based fine-tuning has boosted the performance of Pre-trained Language Models (PLMs) on few-sh...
Pretrained large language models (LLMs) are strong in-context learners that are able to perform few-...
We present a new method, LiST (short for Lite Prompted Self-Training), for parameter-efficient fine-t...
Prompt learning is a new paradigm in the Natural Language Processing (NLP) field which has shown imp...
Speech representations learned from self-supervised learning (SSL) models can benefit various speech...
Transformer-based pre-trained models with millions of parameters require large storage. Recent appro...
Through in-context learning (ICL), large-scale language models are effective few-shot learners witho...
Large language models (LLMs), while transformative for NLP, come with significant computational dema...
In recent years, there has been significant progress in developing pre-trained language models for N...
Recent advances in NLP have been driven by a range of large-scale pretrained language models (PLMs). Thes...