Several studies have investigated the linguistic information implicitly encoded in Neural Language Models. Most of these works focused on quantifying the amount and type of information available within their internal representations and across their layers. In line with this scenario, we proposed a different study, based on Lasso regression, aimed at understanding how the information encoded by BERT sentence-level representations is arranged within its hidden units. Using a suite of several probing tasks, we showed the existence of a relationship between the implicit knowledge learned by the model and the number of individual units involved in the encoding of this competence. Moreover, we found that it is possible to identify groups of hidden units that are more relevant for specific linguistic properties.
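As a concrete illustration of the approach the abstract describes, the sketch below fits an L1-regularized (Lasso) probe on BERT sentence-level representations and reads the nonzero coefficients as the hidden units involved in encoding the probed property. Everything beyond the abstract is an assumption made for illustration: the checkpoint (bert-base-uncased), mean pooling as the sentence representation, sentence length as the example probing task, and the regularization strength alpha.

```python
# Minimal sketch of a Lasso-based probe over BERT sentence embeddings.
# Assumptions (not from the paper): bert-base-uncased, mean pooling,
# sentence length as a toy probing target, alpha=0.01, a handful of
# sentences instead of a full probing corpus.
import numpy as np
import torch
from sklearn.linear_model import Lasso
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def sentence_embedding(sentence: str) -> np.ndarray:
    """Mean-pool the last-layer token states into one 768-d sentence vector."""
    inputs = tokenizer(sentence, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # shape (1, seq_len, 768)
    return hidden.mean(dim=1).squeeze(0).numpy()

sentences = [
    "The cat sat on the mat.",
    "Colorless green ideas sleep furiously.",
    "She read the report before the meeting started.",
    "Rain fell all night.",
]
X = np.stack([sentence_embedding(s) for s in sentences])
y = np.array([len(s.split()) for s in sentences], dtype=float)  # toy target

# L1 regularization drives most coefficients exactly to zero, so the
# surviving coefficients point at the hidden units most relevant to the
# probed linguistic property.
probe = Lasso(alpha=0.01)
probe.fit(X, y)
relevant_units = np.flatnonzero(probe.coef_)
print(f"{relevant_units.size} of {X.shape[1]} units selected:", relevant_units[:10])
```

In the paper's setup this would be repeated across a suite of probing tasks and a much larger sentence sample, so that the number and identity of selected units can be compared across linguistic competences; the snippet only shows the mechanics for a single task.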