Adopting a two-stage paradigm of pretraining followed by fine-tuning, Pretrained Language Models (PLMs) have achieved substantial advancements in the field of natural language processing. However, in real-world scenarios, data labels are often noisy due to the complex annotation process, making it essential to develop strategies for fine-tuning PLMs with such noisy labels. To this end, we introduce an innovative approach for fine-tuning PLMs using noisy labels, which incorporates the guidance of Large Language Models (LLMs) like ChatGPT. This guidance assists in accurately distinguishing between clean and noisy samples and provides supplementary information beyond the noisy labels, thereby boosting the learning process during fine-tuning PL...
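The visible part of this abstract suggests a simple recipe: query an LLM for its own label on each training example, treat agreement with the given (possibly noisy) label as evidence that the sample is clean, and keep the LLM label as supplementary information for the disagreeing samples. The sketch below illustrates that idea only; `query_llm` is a hypothetical placeholder for whatever LLM client is actually used, and the agreement-based partition rule is an assumption for illustration, not the paper's exact procedure.

```python
from typing import Callable, List, Tuple


def partition_by_llm_agreement(
    texts: List[str],
    noisy_labels: List[str],
    query_llm: Callable[[str], str],
    label_set: List[str],
) -> Tuple[list, list]:
    """Split (text, label) pairs into a 'clean' and a 'noisy' set.

    A sample is treated as clean when the LLM's predicted label agrees with
    the dataset label; otherwise it is kept in the noisy set together with
    the LLM's label as supplementary information.
    """
    clean, noisy = [], []
    for text, label in zip(texts, noisy_labels):
        prompt = (
            f"Classify the following text into one of {label_set}.\n"
            f"Text: {text}\nAnswer with the label only."
        )
        llm_label = query_llm(prompt).strip()
        if llm_label == label:
            clean.append((text, label))
        else:
            # Disagreement: keep both labels so the fine-tuning loss can
            # down-weight the sample or use the LLM label as extra signal.
            noisy.append((text, label, llm_label))
    return clean, noisy


if __name__ == "__main__":
    # Mock LLM for demonstration; replace with a real API client.
    mock_llm = lambda prompt: "positive" if "great" in prompt else "negative"
    texts = ["great movie", "terrible plot"]
    labels = ["positive", "positive"]  # second label is wrong (noisy)
    clean, noisy = partition_by_llm_agreement(
        texts, labels, mock_llm, ["positive", "negative"]
    )
    print(clean)  # [('great movie', 'positive')]
    print(noisy)  # [('terrible plot', 'positive', 'negative')]
```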
Sparse Mixture-of-Experts (MoE) is a neural architecture design that can be utilized to add learnabl...
Pre-trained language models (PLMs) have demonstrated impressive performance across various downstrea...
Recently, instruction fine-tuning has risen to prominence as a potential method for enhancing the ze...
The reusability of state-of-the-art Pre-trained Language Models (PLMs) is often limited by their gen...
Language model fine-tuning is essential for modern natural language processing, but is computational...
We introduce BitFit, a sparse-finetuning method where only the bias-terms of the model (or a subset ...
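Because BitFit's stated idea is to update only the bias terms, a bias-only fine-tuning setup is easy to sketch. The snippet below is a minimal illustration, assuming PyTorch and a Hugging Face transformers classifier; it is not the authors' released code, and the bert-base-uncased checkpoint, the trainable classifier head, and the learning rate are arbitrary choices for the example.

```python
import torch
from transformers import AutoModelForSequenceClassification

# Load any encoder with a classification head; the checkpoint is arbitrary here.
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

# BitFit-style setup: freeze everything except parameters whose name marks them
# as bias terms. Leaving the task-specific classifier head trainable as well is
# an assumption made in this sketch.
for name, param in model.named_parameters():
    param.requires_grad = ("bias" in name) or name.startswith("classifier")

trainable = [p for p in model.parameters() if p.requires_grad]
total = sum(p.numel() for p in model.parameters())
print(f"training {sum(p.numel() for p in trainable):,} of {total:,} parameters")

optimizer = torch.optim.AdamW(trainable, lr=1e-3)
# ...continue with a standard fine-tuning loop over the task's DataLoader.
```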
In this paper, we move towards combining large parametric models with non-parametric prototypical ne...
Widely used language models (LMs) are typically built by scaling up a two-stage training pipeline: a...
The advent of large-scale pre-trained language models has contributed greatly to the recent progress...
While pre-trained language model (PLM) fine-tuning has achieved strong performance in many NLP tasks...
Noisy labels in training data present a challenging issue in classification tasks, misleading a mode...
Pretrained large language models (LLMs) are strong in-context learners that are able to perform few-...
The pre-training and fine-tuning paradigm has contributed to a number of breakthroughs in Natural La...
We present a new method, LiST (short for Lite Prompted Self-Training), for parameter-efficient fine-t...
Crowdsourcing platforms are often used to collect datasets for training machine learning models, des...