Recent advances in NLP have been driven by a range of large-scale pretrained language models (PLMs). These models have delivered significant performance gains across a range of NLP tasks, removing the need to design complex task-specific architectures. However, most current work focuses on fine-tuning PLMs on domain-specific datasets, ignoring the fact that the domain gap can lead to overfitting and even performance degradation. It is therefore practically important to find appropriate methods for effectively adapting PLMs to a target domain of interest, and a range of such methods has recently been proposed. Early surveys on domain adaptation are not suitable for PLMs due to the sophisticated behavior of PLMs, which differs from traditional ...
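To make the adaptation setting concrete, below is a minimal sketch of one common approach, domain-adaptive (continued) pretraining: the PLM's masked-language-modeling objective is run on unlabeled in-domain text before any task-specific fine-tuning. It assumes the Hugging Face transformers and datasets libraries; the checkpoint name and corpus path are illustrative placeholders, not part of the original text.

```python
# Sketch: domain-adaptive pretraining via continued MLM training.
# Assumes Hugging Face `transformers` and `datasets` are installed.
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)
from datasets import load_dataset

model_name = "bert-base-uncased"  # any masked-LM checkpoint (placeholder)
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# Unlabeled in-domain corpus, one document per line (hypothetical path).
raw = load_dataset("text", data_files={"train": "in_domain_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = raw["train"].map(tokenize, batched=True, remove_columns=["text"])

# Dynamic masking: 15% of tokens are masked for the MLM objective.
collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="dapt-checkpoint", num_train_epochs=1),
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()  # adapted weights are then fine-tuned on the target task
```

The design choice here is to reuse the original pretraining objective on target-domain text, which narrows the domain gap without requiring any labeled data; task-specific fine-tuning follows as a separate step.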
Recent work has demonstrated that pre-training in-domain language models can boost performance when ...
Pretrained language models (PLMs) are today the primary model for natural language processing. Despi...
The reusability of state-of-the-art Pre-trained Language Models (PLMs) is often limited by their gen...
The performance of a machine learning model trained on labeled data of a (source) domain degrades se...
When a neural language model (LM) is adapted to perform a new task, what aspects of the task predict...
With the fast growth of the amount of digitalized texts in recent years, text information management...
Large language models have transformed the field of natural language processing (NLP). Their improve...
Neural network training has been shown to be advantageous in many natural language processing appli...
Deep models must learn robust and transferable representations in order to perform well on new domai...