Ever since the development of GPT-3 in the natural language processing (NLP) field, in-context learning (ICL) has played an important role in utilizing large language models (LLMs). By presenting utterance-label demonstrations to the LM at the input, the LM can accomplish few-shot learning without relying on gradient descent or requiring explicit modification of its parameters. This enables the LM to learn and adapt in a black-box manner. Despite the success of ICL in NLP, little work has explored the possibility of ICL in speech processing. This study proposes the first exploration of ICL with a speech LM without text supervision. We first show that the current speech LM does not have the ICL capability. With the proposed warmup training, the speech LM can then perform ICL on unseen tasks.
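To make the mechanism concrete, below is a minimal Python sketch of the prompting scheme the abstract describes: utterance-label demonstrations are concatenated into a single input sequence, and the model predicts the query's label by inference alone, with no gradient update. The discrete-unit tokens, the separator token, and the `speech_lm` callable are illustrative assumptions, not the paper's actual interface.

```python
# A minimal sketch (not the paper's implementation) of ICL prompting:
# utterance-label demonstrations are concatenated into one input sequence
# and the model predicts the query's label with its parameters held fixed.
# Token values, the separator, and `speech_lm` are hypothetical placeholders.
from typing import Callable, List, Sequence, Tuple

def build_icl_prompt(
    demonstrations: List[Tuple[Sequence[int], int]],  # (discrete speech units, label)
    query_units: Sequence[int],
    sep_token: int = -1,  # assumed separator between demonstration pairs
) -> List[int]:
    """Concatenate k utterance-label demonstrations, then append the query."""
    prompt: List[int] = []
    for units, label in demonstrations:
        prompt.extend(units)
        prompt.append(label)      # each label token directly follows its utterance
        prompt.append(sep_token)
    prompt.extend(query_units)    # the LM is expected to continue with the label
    return prompt

def icl_predict(
    speech_lm: Callable[[List[int]], int],
    demonstrations: List[Tuple[Sequence[int], int]],
    query_units: Sequence[int],
) -> int:
    """Few-shot prediction via inference alone: no gradient step is taken."""
    return speech_lm(build_icl_prompt(demonstrations, query_units))

# Toy usage with a constant stub standing in for a real speech LM forward pass.
demos = [([101, 102, 103], 1), ([104, 105], 0), ([106, 107, 108], 1)]
stub_lm = lambda prompt: 1  # always predicts label 1; a real LM would decode it
print(icl_predict(stub_lm, demos, query_units=[109, 110]))
```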
Through in-context learning (ICL), large-scale language models are effective few-shot learners witho...
An end-to-end (E2E) ASR model implicitly learns a prior Internal Language Model (ILM) from the train...
Self-supervised learning (SSL) achieves great success in speech recognition, while limited explorati...
Speech representations learned from self-supervised learning (SSL) models can benefit various speech...
In-context learning (ICL), i.e., showing LLMs only a few task-specific demonstrations, has led to downs...
In-context learning (ICL) over large language models (LLMs) aims at solving previously unseen tasks ...
In-context learning is a recent paradigm in natural language understanding, where a large pre-traine...
In-context learning (ICL) has become the default method for using large language models (LLMs), maki...
With a handful of demonstration examples, large-scale language models show strong capability to perf...
Large language models (LMs) such as GPT-3 have the surprising ability to do in-context learning, whe...
Large language models have exhibited emergent abilities, demonstrating exceptional performance acros...
Pre-trained models of source code have gained widespread popularity in many code intelligence tasks....
Large language models (LMs) are able to in-context learn -- perform a new task via inference alone b...
Advances in self-supervised learning have significantly reduced the amount of transcribed audio requ...