We propose a hypothesis-only baseline for diagnosing Natural Language Inference (NLI). Especially when an NLI dataset assumes inference is occurring based purely on the relationship between a context and a hypothesis, it follows that assessing entailment relations while ignoring the provided context is a degenerate solution. Yet, through experiments on 10 distinct NLI datasets, we find that this approach, which we refer to as a hypothesis-only model, is able to significantly outperform a majority-class baseline across a number of NLI datasets. Our analysis suggests that statistical irregularities may allow a model to perform NLI in some datasets beyond what should be achievable without access to the context.
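The two baselines contrasted above can be sketched in a few lines. This is a toy illustration with hypothetical examples, not the paper's actual setup: the hand-coded negation cue stands in for a learned classifier, which would pick up such hypothesis artifacts automatically from training data.

```python
from collections import Counter

# Hypothetical toy examples: each pairs a premise and hypothesis with a gold
# label, but the hypothesis-only baseline never looks at the premise.
examples = [
    {"premise": "A man is playing a guitar.", "hypothesis": "A man is making music.", "label": "entailment"},
    {"premise": "A man is playing a guitar.", "hypothesis": "Nobody is playing anything.", "label": "contradiction"},
    {"premise": "Two dogs run on a beach.", "hypothesis": "Animals are outside.", "label": "entailment"},
    {"premise": "Two dogs run on a beach.", "hypothesis": "The dogs are not moving.", "label": "contradiction"},
    {"premise": "A child reads a book.", "hypothesis": "The child is sleeping.", "label": "contradiction"},
    {"premise": "A child reads a book.", "hypothesis": "Someone is reading.", "label": "entailment"},
    {"premise": "A woman cooks dinner.", "hypothesis": "A person is preparing food.", "label": "entailment"},
    {"premise": "A boy jumps over a puddle.", "hypothesis": "A boy is in motion.", "label": "entailment"},
]

def majority_class_baseline(train):
    """Predict the most frequent training label for every example."""
    majority = Counter(ex["label"] for ex in train).most_common(1)[0][0]
    return lambda ex: majority

def hypothesis_only_baseline(train):
    """Predict from the hypothesis alone, ignoring the premise.
    A single negation cue serves as a stand-in for a learned model."""
    negation_cues = {"no", "not", "never", "nobody", "nothing"}
    def predict(ex):
        tokens = set(ex["hypothesis"].lower().replace(".", "").split())
        return "contradiction" if tokens & negation_cues else "entailment"
    return predict

def accuracy(predict, data):
    return sum(predict(ex) == ex["label"] for ex in data) / len(data)

maj = majority_class_baseline(examples)
hyp = hypothesis_only_baseline(examples)
print(f"majority-class:  {accuracy(maj, examples):.3f}")
print(f"hypothesis-only: {accuracy(hyp, examples):.3f}")
```

On this toy set the hypothesis-only heuristic beats the majority-class baseline, mirroring the paper's diagnosis: when label-correlated cues leak into hypotheses, a model can score well without ever reading the context.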
Natural Language Inference (NLI) models are known to learn from biases and artefacts within their tr...
Success in natural language inference (NLI) should require a model to understand both lexical and co...
When strong partial-input baselines reveal artifacts in crowdsourced NLI datasets, the performance o...
Neural network models have been very successful in natural language inference, with the best models ...
Natural Language Inference (NLI) datasets contain annotation artefacts resulting in spurious correla...
Natural Language Inference (NLI) research involves the development of models that can mimic human in...
Article presents a new benchmark for natural language inference in which negation plays a critical r...
Do state-of-the-art models for language understanding already have, or can they easily learn, abilit...
Natural language inference (NLI) is one of the most important natural language understanding (NLU) t...
It has been shown that NLI models are usually biased with respect to the word-overlap between premis...
We present a large-scale collection of diverse natural language inference (NLI) datasets that help p...
Recent studies have shown that strong Natural Language Understanding (NLU) models are prone to relyi...
Natural language inference (NLI) datasets (e.g., MultiNLI) were collected by soliciting hypotheses f...
This dissertation investigates the mechanism of language acquisition given the boundary conditions p...