With the dramatic increase in the number of parameters in language models, sparsity methods have received growing research attention as a way to compress and accelerate these models. While most research focuses on how to accurately retain the appropriate weights while preserving the performance of the compressed model, the computational overhead and memory footprint of sparse training remain challenging when compressing large-scale language models. To address this problem, we propose a Parameter-efficient Sparse Training (PST) method that reduces the number of trainable parameters during sparse-aware training on downstream tasks. Specifically, we first combine data-free and data-driven criteria to efficiently and accurately measure the importance...
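To make the idea of combining data-free and data-driven importance criteria concrete, here is a minimal sketch. It assumes the data-free criterion is weight magnitude and the data-driven criterion is a gradient-based score, with a mixing coefficient alpha; these specific choices, function names, and the top-k masking step are illustrative assumptions, not the paper's exact PST formulation.

import torch

def combined_importance(weight, grad, alpha=0.5):
    # Illustrative score mixing a data-free term (|w|) with a
    # data-driven term (|w * grad|); alpha is a hypothetical knob.
    data_free = weight.abs()
    data_driven = (weight * grad).abs()
    return alpha * data_free + (1 - alpha) * data_driven

def topk_mask(score, sparsity=0.9):
    # Keep only the (1 - sparsity) fraction of weights with the highest score.
    k = int(score.numel() * (1 - sparsity))
    threshold = torch.topk(score.flatten(), k).values.min()
    return (score >= threshold).float()

# Usage: after a backward pass, mask the least important weights.
w = torch.randn(768, 768, requires_grad=True)
loss = w.sum() ** 2
loss.backward()
mask = topk_mask(combined_importance(w.detach(), w.grad))
sparse_w = w.detach() * mask

In this sketch the importance score is recomputed from the current weights and gradients, so the mask can be updated as training proceeds; how PST actually parameterizes the data-driven term in a low-rank, parameter-efficient way is described in the abstract above but not reproduced here.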
Transformer-based language models have become a key building block for natural language processing. ...
A recent result in compressed sensing (CS) allows us to perform non-parametric speech recognition t...
Overparameterized neural networks generalize well but are expensive to train. Ideally, one would lik...
The pre-training and fine-tuning paradigm has contributed to a number of breakthroughs in Natural La...
Large Language Models have become the core architecture upon which most modern natural language proc...
The training of sparse neural networks is becoming an increasingly important tool for reducing the ...
Gigantic pre-trained models have become central to natural language processing (NLP), serving as the...
Sparsity has become one of the promising methods to compress and accelerate Deep Neural Networks (DN...
Model sparsification in deep learning promotes simpler, more interpretable models with fewer paramet...
The growing energy and performance costs of deep learning have driven the community to reduce the si...
Pre-trained Language Models (PLMs) have achieved great success in various Natural Language Processin...
Scaling language models with more data, compute and parameters has driven significant progress in na...
We consider the problem of accurate sparse fine-tuning of large language models (LLMs), that is, fin...
Deep learning has been empirically successful in recent years thanks to the extremely over-parameter...
Progress in Machine Learning is being driven by continued growth in model size, training data and al...