Self-supervision based on information extracted from large knowledge graphs has been shown to improve the generalization of language models in zero-shot evaluation on various downstream language reasoning tasks. Since these improvements are reported in aggregate, however, little is known about (i) how to select the appropriate knowledge for solid performance across tasks, (ii) how to combine this knowledge with neural language models, and (iii) how these pairings affect granular task performance. In this paper, we study the effect of knowledge sampling strategies and sizes that can be used to generate synthetic data for adapting language models. We study the effect of different synthetic datasets on language models with various architectures...
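To make the pipeline described above concrete, the sketch below illustrates the core idea of a knowledge sampling strategy: selecting triples from a knowledge graph and verbalizing them into synthetic training examples for language model adaptation. This is a minimal toy illustration, not the paper's actual pipeline; the triples, templates, and the `sample_triples` / `to_qa` helpers are all hypothetical names introduced here for exposition.

```python
# Toy sketch: sample knowledge-graph triples under two strategies and
# verbalize them into synthetic QA pairs for adapting a language model.
# All data and function names are illustrative assumptions.
import random
from collections import defaultdict

# Toy knowledge graph: (head, relation, tail) triples.
TRIPLES = [
    ("guitar", "UsedFor", "playing music"),
    ("oven", "UsedFor", "baking bread"),
    ("dog", "CapableOf", "barking"),
    ("bird", "CapableOf", "flying"),
    ("rain", "Causes", "wet streets"),
    ("fire", "Causes", "smoke"),
]

# Simple verbalization templates, one per relation type.
TEMPLATES = {
    "UsedFor": "What is a {h} used for?",
    "CapableOf": "What is a {h} capable of?",
    "Causes": "What does {h} cause?",
}

def sample_triples(triples, k, strategy="random", seed=0):
    """Select k triples uniformly at random, or balanced by relation."""
    rng = random.Random(seed)
    if strategy == "random":
        return rng.sample(triples, k)
    # Relation-balanced: round-robin over relations so that no single
    # relation dominates the resulting synthetic dataset.
    by_rel = defaultdict(list)
    for t in triples:
        by_rel[t[1]].append(t)
    for bucket in by_rel.values():
        rng.shuffle(bucket)
    picked, rels = [], list(by_rel)
    while len(picked) < k:
        for rel in rels:
            if by_rel[rel] and len(picked) < k:
                picked.append(by_rel[rel].pop())
    return picked

def to_qa(triple):
    """Verbalize one triple into a (question, answer) training pair."""
    h, r, t = triple
    return TEMPLATES[r].format(h=h), t

if __name__ == "__main__":
    for strat in ("random", "balanced"):
        data = [to_qa(t) for t in sample_triples(TRIPLES, 4, strategy=strat)]
        print(strat, data)
```

The two strategies differ in how they trade coverage for scale: uniform sampling follows the raw relation distribution of the graph, while relation-balanced sampling equalizes exposure across knowledge dimensions, which is the kind of sampling-strategy choice whose downstream effect the paper sets out to measure.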