When a neural language model (LM) is adapted to perform a new task, what aspects of the task predict the eventual performance of the model? In NLP, systematic features of LM generalization to individual examples are well characterized, but systematic aspects of LM adaptability to new tasks are not nearly as well understood. We present a large-scale empirical study of the features and limits of LM adaptability using a new benchmark, TaskBench500, built from 500 procedurally generated sequence modeling tasks. These tasks combine core aspects of language processing, including lexical semantics, sequence processing, memorization, logical reasoning, and world knowledge. Using TaskBench500, we evaluate three facets of adaptability, finding that: ...
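To make the benchmark's construction concrete, below is a minimal sketch of how atomic word-level functions might be composed into procedurally generated sequence tasks of the kind the abstract describes. The toy lexicon, function names, and composition scheme are illustrative assumptions, not TaskBench500's actual implementation.

```python
# Hypothetical sketch of procedural task composition: atomic functions
# covering lexical semantics and sequence processing are chained into
# new sequence-to-sequence tasks. All names here are illustrative.

SYNONYMS = {"fast": "quick", "big": "large"}  # toy lexical-semantics data

def synonym(word):
    # Lexical semantics: map a word to a synonym (identity if unknown).
    return SYNONYMS.get(word, word)

def reverse(seq):
    # Sequence processing: reverse the token order.
    return list(reversed(seq))

def map_words(fn):
    # Lift a word-level function to act element-wise on a sequence.
    return lambda seq: [fn(w) for w in seq]

def compose(*fns):
    # Chain task functions left-to-right into a single new task.
    def task(seq):
        for fn in fns:
            seq = fn(seq)
        return seq
    return task

# One procedurally generated task: replace each word with a synonym,
# then reverse the sequence.
task = compose(map_words(synonym), reverse)
print(task(["big", "fast", "dog"]))  # ['dog', 'quick', 'large']
```

Composing a handful of such atomic functions yields a combinatorially large task space, which is what lets a benchmark of this style scale to hundreds of distinct tasks.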
Modern neural language models that are widely used in various NLP tasks risk memorizing sensitive in...
Language models, given their black-box nature, often exhibit sensitivity to input perturbations, lea...
In recent years, language models (LMs) have made remarkable progress in advancing the field of natu...
Why can pre-trained language models (PLMs) learn universal representations and effectively adapt to ...
Recent advances in NLP are brought by a range of large-scale pretrained language models (PLMs). Thes...
Transformer-based pre-trained models with millions of parameters require large storage. Recent appro...
Pretrained language models (PTLMs) are typically learned over a large, static corpus and further fin...
Large language models have exhibited emergent abilities, demonstrating exceptional performance acros...
Pre-training language models (LMs) on large-scale unlabeled text data makes the model much easier to...
Transfer learning from large language models (LLMs) has emerged as a powerful technique to enable kn...
There is growing interest in adapting large-scale language models using parameter-efficient fine-t...
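The snippet above names parameter-efficient fine-tuning only in passing; as one concrete illustration, here is a hedged sketch of a LoRA-style low-rank adapter that freezes the pretrained weights and trains a small additive update. The class name, rank, and initialization are assumptions for illustration, not the paper's method.

```python
import torch
import torch.nn as nn

class LowRankAdapter(nn.Module):
    """Wrap a frozen linear layer with a trainable low-rank update W + BA."""

    def __init__(self, base: nn.Linear, rank: int = 8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():  # freeze the pretrained weights
            p.requires_grad = False
        # Only these two small matrices are trained.
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))

    def forward(self, x):
        # Frozen base output plus the small trainable correction x A^T B^T.
        return self.base(x) + x @ self.A.T @ self.B.T

layer = LowRankAdapter(nn.Linear(768, 768), rank=8)
out = layer(torch.randn(2, 768))  # only A and B receive gradients
```

Because only A and B are updated, the number of trainable parameters scales with the rank rather than with the full weight matrix, which is the core storage and compute saving such methods target.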
Recent work has shown that fine-tuning large pre-trained language models on a collection of tasks de...
Keeping the performance of language technologies optimal as time passes is of great practical intere...
It is now widely acknowledged that neural network language models outperform backoff language models in a...
Over-parameterized neural models have become dominant in Natural Language Processing. Increasing the...