Success in natural language inference (NLI) should require a model to understand both lexical and compositional semantics. However, through adversarial evaluation, we find that several state-of-the-art models with diverse architectures over-rely on the former and fail to use the latter. Further, this compositionality unawareness is not reflected via standard evaluation on current datasets. We show that removing RNNs in existing models or shuffling input words during training does not induce large performance loss despite the explicit removal of compositional information. Therefore, we propose a compositionality-sensitivity testing setup that analyzes models on natural examples from existing datasets that cannot be solved via lexical ...
Many believe human-level natural language inference (NLI) has already been achieved. In reality, mod...
Natural language inference (NLI) datasets (e.g., MultiNLI) were collected by soliciting hypotheses f...
Natural language inference (NLI) models are susceptible to learning shortcuts, i.e. decision rules t...
Natural Language Inference is a challenging task that has received substantial attention, and state-...
Natural Language Inference is a challenging task that has received substantial attention, and state-...
Do state-of-the-art models for language understanding already have, or can they easily learn, abilit...
Deep transformer models have pushed performance on NLP tasks to new limits, suggesting sophisticated...
Natural Language Inference (NLI) datasets contain annotation artefacts resulting in spurious correla...
Natural Language Inference (NLI) models are known to learn from biases and artefacts within their tr...
Natural Language Inference (NLI) plays an important role in many natural language processing tasks s...
Natural language inference (NLI) task is a predictive task of determining the inference relationship ...
The focus of this thesis is to incorporate linguistic theories of semantics into data-driven models ...
Thesis (Master's)--University of Washington, 2021This paper investigates whether biasing natural lan...
Natural language inference (NLI) is one of the most important natural language understanding (NLU) t...
Non-compositional phrases such as `red herring' and weakly compositional phrases such as `spelling b...