Visual Parameter-Efficient Fine-Tuning (PEFT) has become a powerful alternative to full fine-tuning for adapting pre-trained vision models to downstream tasks: it tunes only a small number of parameters while freezing the vast majority, easing both the storage burden and the optimization difficulty. However, existing PEFT methods introduce trainable parameters at the same positions across different tasks, relying solely on human heuristics and neglecting domain gaps. To address this, we study where to introduce and how to allocate trainable parameters by proposing a novel Sensitivity-aware visual Parameter-efficient fine-Tuning (SPT) scheme, which adaptively allocates trainable parameters to task-specific important positions given a desired tun...
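A minimal PyTorch sketch of the sensitivity-driven allocation idea described above, assuming a first-order |gradient × weight| saliency computed on a single probe batch; the function names, the single-batch estimate, and the tensor-level (rather than finer-grained) selection are illustrative simplifications, not the paper's exact procedure.

```python
import torch

def sensitivity_scores(model, loss_fn, batch):
    """Approximate per-tensor sensitivity with one backward pass on a
    probe batch from the downstream task. |grad * weight| serves as a
    first-order estimate of how much the loss would change if that
    position were tuned (one common saliency choice, assumed here)."""
    model.zero_grad()
    inputs, targets = batch
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    return {
        name: (p.grad * p.detach()).abs().sum().item()
        for name, p in model.named_parameters()
        if p.grad is not None
    }

def allocate_trainable(model, scores, budget):
    """Freeze everything, then unfreeze the highest-sensitivity tensors
    until the desired trainable-parameter budget is spent."""
    params = dict(model.named_parameters())
    for p in params.values():
        p.requires_grad_(False)
    used = 0
    for name, _ in sorted(scores.items(), key=lambda kv: -kv[1]):
        p = params[name]
        if used + p.numel() <= budget:
            p.requires_grad_(True)
            used += p.numel()
    return used
```

A typical usage would be to call sensitivity_scores on a probe batch from the target task, pass the result to allocate_trainable together with the parameter budget, and then run standard fine-tuning so that only the unfrozen positions are updated.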
In recent years, convolutional neural networks have achieved state-of-the-art performance in a numbe...
As a dominant paradigm, fine-tuning a pre-trained model on the target data is widely used in many de...
Recent advancements in Large Language Models (LLMs) have enabled the development of a single model c...
Existing fine-tuning methods either tune all parameters of the pre-trained model (full fine-tuning),...
The current modus operandi in adapting pre-trained models involves updating all the backbone paramet...
Large language models (LLMs) and vision language models (VLMs) demonstrate excellent performance on ...
Recent advancements have illuminated the efficacy of some tensorization-decomposition Parameter-Effi...
Current state-of-the-art results in computer vision depend in part on fine-tuning large pre-trained ...
We present Generalized LoRA (GLoRA), an advanced approach for universal parameter-efficient fine-tun...
Adapting large-scale pretrained models to various downstream tasks via fine-tuning is a standard met...
Large pre-trained language models have recently gained significant traction due to their improved pe...
Parameter-efficient fine-tuning (PEFT) methods can adapt large language models to downstream tasks b...
Parameter-efficient fine-tuning (PEFT) has shown its effectiveness in adapting the pre-trained langu...
Recent work has explored the potential to adapt a pre-trained vision transformer (ViT) by updating o...
Transfer learning has become a popular task adaptation method in the era of foundation models. Howev...