Large language models (LLMs) and vision-language models (VLMs) demonstrate excellent performance on a wide range of tasks by scaling parameter counts from O(10^9) to O(10^{12}) and beyond. At these scales, fully specializing and deploying a separate model for every task of interest becomes infeasible. Parameter-efficient fine-tuning (PEFT) has emerged as a promising direction for tackling the adaptation and serving challenges of such large models. We categorize PEFT techniques into two types: intrusive and non-intrusive. Intrusive PEFT techniques directly change a model's internal architecture; though more flexible, they introduce significant complexity into training and serving. Non-intrusive PEFT techniques leave the internal architecture...
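To make the intrusive/non-intrusive distinction concrete, below is a minimal PyTorch sketch, not drawn from any of the papers cited here. A bottleneck adapter stands in for the intrusive category, since it inserts a new module into the forward path; a LoRA-style low-rank update stands in for the non-intrusive category, under the assumption that a weight delta which can be merged into the frozen base leaves the served architecture unchanged. All names (AdapterLinear, LoRALinear, merge) are illustrative.

```python
import torch
import torch.nn as nn

class AdapterLinear(nn.Module):
    """Intrusive sketch: a bottleneck adapter is added after a frozen
    linear layer, changing the forward graph at training and serving time."""
    def __init__(self, base: nn.Linear, bottleneck: int = 16):
        super().__init__()
        self.base = base.requires_grad_(False)        # frozen pre-trained layer
        d = base.out_features
        self.down = nn.Linear(d, bottleneck)          # trainable down-projection
        self.up = nn.Linear(bottleneck, d)            # trainable up-projection

    def forward(self, x):
        h = self.base(x)
        return h + self.up(torch.relu(self.down(h)))  # extra residual module

class LoRALinear(nn.Module):
    """Non-intrusive sketch (under the assumption above): the low-rank
    update B @ A can be folded into the base weight, so the deployed
    model keeps the original architecture."""
    def __init__(self, base: nn.Linear, rank: int = 8):
        super().__init__()
        self.base = base.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))

    def forward(self, x):
        return self.base(x) + x @ (self.B @ self.A).T

    def merge(self) -> nn.Linear:
        # Fold the learned update into the frozen weight for serving;
        # assumes the base layer was created with a bias term.
        merged = nn.Linear(self.base.in_features, self.base.out_features)
        merged.weight.data = self.base.weight.data + self.B.data @ self.A.data
        merged.bias.data = self.base.bias.data.clone()
        return merged

base = nn.Linear(768, 768)
served = LoRALinear(base).merge()  # same architecture as the original layer
```

After merge(), the deployed layer is an ordinary nn.Linear, which is what makes serving many task-specialized variants of a single base model tractable.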
Standard fine-tuning of large pre-trained language models (PLMs) for downstream tasks requires updat...
Pre-trained language models (PLMs) have demonstrated impressive performance across various downstrea...
Parameter-efficient fine-tuning (PEFT) methods can adapt large language models to downstream tasks b...
Recent advancements in Large Language Models (LLMs) have enabled the development of a single model c...
We present Generalized LoRA (GLoRA), an advanced approach for universal parameter-efficient fine-tun...
Existing fine-tuning methods either tune all parameters of the pre-trained model (full fine-tuning),...
Fine-tuning large language models for different tasks can be costly and inefficient, and even method...
Adapting pretrained language models to novel domains, such as clinical applications, traditionally i...
Visual Parameter-Efficient Fine-Tuning (PEFT) has become a powerful alternative to full fine-tuning...
Fine-tuning the entire set of parameters of a large pretrained model has become the mainstream appro...
Parameter-efficient fine-tuning methods (PEFTs) offer the promise of adapting large pre-trained mode...
Since the rise of powerful large-scale pre-trained Vision-Language (VL) models, such as CLIP and ALI...
There is growing interest in adapting large-scale language models using parameter-efficient fine-t...
The current modus operandi in adapting pre-trained models involves updating all the backbone paramet...
We introduce BitFit, a sparse-finetuning method where only the bias-terms of the model (or a subset ...
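As a rough illustration of the bias-only idea BitFit describes, the following sketch freezes everything except parameters whose names end in "bias". The name-based filter and the helper apply_bitfit are assumptions made for this example, not the paper's code.

```python
import torch.nn as nn

def apply_bitfit(model: nn.Module) -> nn.Module:
    """Hypothetical helper: freeze all parameters except bias terms."""
    for name, param in model.named_parameters():
        # Bias vectors are tiny relative to weight matrices, so only a
        # small fraction of the parameters remains trainable.
        param.requires_grad = name.endswith("bias")
    return model

# Example: count how few parameters stay trainable in a small Transformer.
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=256, nhead=4), num_layers=2)
apply_bitfit(encoder)
trainable = sum(p.numel() for p in encoder.parameters() if p.requires_grad)
total = sum(p.numel() for p in encoder.parameters())
print(f"trainable bias parameters: {trainable} of {total}")
```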