Speech representations learned from self-supervised learning (SSL) models can benefit various speech processing tasks. However, utilizing SSL representations usually requires fine-tuning the pre-trained models or designing task-specific downstream models and loss functions, which incurs substantial memory usage and human effort. Recently, prompting in natural language processing (NLP) has been found to be an efficient technique for leveraging pre-trained language models (LMs). Specifically, prompt tuning optimizes a limited number of task-specific parameters while keeping the pre-trained model fixed; as a result, only a small set of parameters needs to be stored for each task. Prompt tuning improves computation and memory efficiency by leveraging the pre-trained ...
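To make the parameter-efficiency argument concrete, the sketch below prepends a small set of trainable prompt vectors to the input of a frozen pre-trained encoder, so only the prompts and a lightweight head are trained and stored per task. This is a minimal illustration under assumed names and shapes (the `PromptTunedModel` class, prompt length, toy Transformer encoder, and classification head are placeholders), not the specific model or setup described here.

```python
import torch
import torch.nn as nn


class PromptTunedModel(nn.Module):
    """Prepend trainable prompt vectors to the input of a frozen encoder."""

    def __init__(self, pretrained_encoder: nn.Module, hidden_dim: int,
                 num_prompts: int = 20, num_classes: int = 10):
        super().__init__()
        self.encoder = pretrained_encoder
        for p in self.encoder.parameters():   # freeze all pre-trained weights
            p.requires_grad = False
        # Only these prompt vectors and the small head are stored per task.
        self.prompts = nn.Parameter(torch.randn(num_prompts, hidden_dim) * 0.02)
        self.head = nn.Linear(hidden_dim, num_classes)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # input_embeds: (batch, seq_len, hidden_dim), e.g. frame-level speech
        # features already projected into the encoder's embedding space.
        batch = input_embeds.size(0)
        prompt = self.prompts.unsqueeze(0).expand(batch, -1, -1)
        x = torch.cat([prompt, input_embeds], dim=1)  # prepend prompts
        hidden = self.encoder(x)                      # frozen forward pass
        return self.head(hidden.mean(dim=1))          # pooled classification head


# Toy usage: a small Transformer stands in for an actual pre-trained model.
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=256, nhead=4, batch_first=True),
    num_layers=2)
model = PromptTunedModel(encoder, hidden_dim=256)
logits = model(torch.randn(8, 100, 256))  # gradients flow only to prompts + head
```

In practice only `model.prompts` and `model.head` would be passed to the optimizer, so the per-task checkpoint is orders of magnitude smaller than a fully fine-tuned copy of the pre-trained model.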