Modern language models based on deep artificial neural networks have achieved impressive progress on Natural Language Processing benchmarks and applications in the last few years. However, they have also been shown to often fail to make robust, human-like generalizations, and they need massive amounts of data to reach state-of-the-art performance (orders of magnitude more than is available to humans when they learn language). These advancements and limitations have made it increasingly important to clarify which linguistic phenomena and generalizations such models actually learn. A line of research has emerged on fine-grained targeted linguistic evaluations of neural language models, in which the targeted syntactic evaluation approach is one of the main o...
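The targeted syntactic evaluation paradigm mentioned above can be illustrated with a minimal sketch: a model is presented with minimal pairs of sentences differing only in grammaticality (e.g., subject-verb agreement), and it "passes" a pair when it scores the grammatical variant higher. The scorer below is a deliberately toy stand-in; in an actual evaluation one would sum token log-probabilities from a trained neural language model.

```python
def toy_score(sentence: str) -> float:
    # Hypothetical stand-in scorer for illustration only: it simply
    # rewards plural agreement-bearing verb forms. A real evaluation
    # would use log-probabilities from an actual language model.
    preferred = {"are", "were", "have"}
    return sum(1.0 for w in sentence.lower().split() if w in preferred)

def evaluate_pairs(pairs):
    """Return accuracy: the fraction of minimal pairs where the
    grammatical sentence outscores the ungrammatical one."""
    correct = sum(1 for good, bad in pairs if toy_score(good) > toy_score(bad))
    return correct / len(pairs)

# Classic agreement minimal pairs (grammatical first, ungrammatical second).
pairs = [
    ("The keys to the cabinet are on the table.",
     "The keys to the cabinet is on the table."),
    ("The authors that the critic praised were talented.",
     "The authors that the critic praised was talented."),
]
print(evaluate_pairs(pairs))  # → 1.0 for this toy scorer
```

The per-pair forced-choice comparison, rather than an absolute probability threshold, is what makes the paradigm robust to differences in overall model calibration.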
Since language models are used to model a wide variety of languages, it is natural to ask whether th...
In this paper, we present an in-depth investigation of the linguistic knowledge encoded by the trans...
In the last few years, the analysis of the inner workings of state-of-the-art Neural Language Models ...
The outstanding performance recently reached by Neural Language Models (NLMs) across many Natural La...
In the last few years, pre-trained neural architectures have provided impressive improvements across...