The diversity and Zipfian frequency distribution of natural language predicates in corpora lead to sparsity in Entailment Graphs (EGs) built by Open Relation Extraction (ORE). EGs are computationally efficient and explainable models of natural language inference, but as symbolic models, they fail if a novel premise or hypothesis vertex is missing at test time. We present theory and methodology for overcoming such sparsity in symbolic models. First, we introduce a theory of optimal smoothing of EGs by constructing transitive chains. We then demonstrate an efficient, open-domain, and unsupervised smoothing method that uses an off-the-shelf Language Model to find approximations of missing premise predicates. This improves recall by 25.1 and 16.3 ...
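The smoothing idea can be pictured with a minimal sketch, under stated assumptions: an unseen premise predicate is approximated by its most similar in-graph predicate (here via nearest-neighbour cosine similarity over sentence-transformers embeddings, an illustrative stand-in for the paper's unspecified LM procedure), and entailment is then read off the graph's transitive chains. The toy predicates, the all-MiniLM-L6-v2 model, and the networkx graph are hypothetical choices for illustration, not the authors' implementation.

```python
# Sketch only: smoothing a symbolic entailment graph at query time.
import networkx as nx
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed off-the-shelf LM

# Toy entailment graph: an edge u -> v means "u entails v".
eg = nx.DiGraph()
eg.add_edges_from([
    ("person.acquire.company", "person.own.company"),
    ("person.own.company", "person.control.company"),
])

def entails(premise: str, hypothesis: str) -> bool:
    """Check entailment, approximating a missing premise vertex by the most
    similar in-graph predicate under cosine similarity of LM embeddings."""
    if premise not in eg:
        vertices = list(eg.nodes)
        sims = util.cos_sim(encoder.encode([premise]), encoder.encode(vertices))[0]
        premise = vertices[int(sims.argmax())]  # smoothed approximation
    if hypothesis not in eg:
        return False
    # Follow the transitive chain from the (possibly smoothed) premise.
    return nx.has_path(eg, premise, hypothesis)

# "person.purchase.company" is unseen; it is smoothed onto
# "person.acquire.company", which transitively entails "person.control.company".
print(entails("person.purchase.company", "person.control.company"))
```

The key design point the sketch tries to convey is that the symbolic graph remains the inference engine; the LM is consulted only to re-enter the graph when a premise vertex is missing.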