Humans understand language by extracting information (meaning) from sentences, combining it with existing commonsense knowledge, and then performing reasoning to draw conclusions. While large language models (LLMs) such as GPT-3 and ChatGPT are able to leverage patterns in the text to solve a variety of NLP tasks, they fall short in problems that require reasoning. They also cannot reliably explain the answers generated for a given question. In order to emulate humans better, we propose STAR, a framework that combines LLMs with Answer Set Programming (ASP). We show how LLMs can be used to effectively extract knowledge -- represented as predicates -- from language. Goal-directed ASP is then employed to reliably reason over this knowledge. We...
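To make the pipeline in this abstract concrete, here is a minimal sketch of the "LLM extracts predicates, ASP reasons over them" idea. Everything in it is an assumption for illustration: extract_predicates is a hypothetical stand-in for the LLM extraction step, the rules are invented toy commonsense, and the clingo Python API is used only as a convenient substitute for a goal-directed ASP solver (e.g., s(CASP)); it is not the paper's actual implementation.

```python
# Minimal sketch of an LLM-to-ASP pipeline (illustrative assumptions only).
# `extract_predicates` stands in for an LLM call that turns a sentence into
# ASP facts; clingo is used here as a substitute for a goal-directed ASP solver.
import clingo


def extract_predicates(sentence: str) -> str:
    """Hypothetical LLM step: map a sentence to ASP facts (stubbed here)."""
    # A real system would prompt an LLM; we hard-code the expected output.
    return "bird(tweety)."


# Hand-written commonsense rules (toy example, not from the paper).
RULES = """
flies(X) :- bird(X), not -flies(X).
-flies(X) :- penguin(X).
"""


def answer(sentence: str, query: str) -> bool:
    """Ground the extracted facts with the rules and check whether `query` holds."""
    facts = extract_predicates(sentence)
    ctl = clingo.Control()
    ctl.add("base", [], RULES + facts)
    ctl.ground([("base", [])])
    holds = False

    def on_model(model):
        nonlocal holds
        if any(str(atom) == query for atom in model.symbols(atoms=True)):
            holds = True

    ctl.solve(on_model=on_model)
    return holds


print(answer("Tweety is a bird.", "flies(tweety)"))  # True under the toy rules above
```

The point of the sketch is the division of labor: the LLM only produces facts (predicates), while all inference, including the defeasible "birds fly unless shown otherwise" rule, is carried out by the ASP solver, which is what makes the answers reproducible and explainable.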
We address the challenging task of computational natural language inference, by which we mean bridgi...
Despite recent success in large language model (LLM) reasoning, LLMs struggle with hierarchical mult...
Thesis (Ph.D.)--University of Washington, 2020. For machines to understand language, they must intuiti...
The significance of real-world knowledge for Natural Language Understanding (NLU) is well known...
Natural language understanding (NLU) of text is a fundamental challenge in AI, and it has received s...
Large language models (LLMs) have shown remarkable reasoning capabilities given chain-of-thought pro...
While recent advancements in large language models (LLMs) bring us closer to achieving artificial ge...
Given a text, several questions can be asked. For some of these questions, the answer can be directl...
Alongside the huge volume of research on deep learning models in NLP in recent years, there has bee...
Logical reasoning consistently plays a fundamental and significant role in the domains of knowledge ...
Recent developments in large language models (LLMs) have shown promise in enhancing the capabilities...
Contextualized representations trained over large raw text data have given remarkable improvements f...
Most controlled natural languages (CNLs) are processed with the help of a pipeline architecture that...
Recently, large language models (LLMs), including notable models such as GPT-4 and burgeoning commun...