The recent explosion in question answering research produced a wealth of both factoid reading comprehension (RC) and commonsense reasoning datasets. Combining them presents a different kind of task: deciding not simply whether information is present in the text, but also whether a confident guess could be made for the missing information. We present QuAIL, the first RC dataset to combine text-based, world-knowledge, and unanswerable questions, and to provide question type annotation that enables diagnostics of the reasoning strategies used by a given QA system. QuAIL contains 15K multiple-choice questions for 800 texts in 4 domains. Crucially, it offers both general and text-specific questions that are unlikely to be found in pretraining data. We show ...
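As a rough illustration of the kind of example such a dataset contains, the sketch below shows one way a multiple-choice RC item with a question-type annotation and an "unanswerable" option might be represented. This is a hypothetical sketch based only on the abstract above; the field names, the example question, and the answer options are assumptions, not the dataset's actual schema.

```python
# Hypothetical sketch of a QuAIL-style multiple-choice RC example.
# Field names and values are illustrative assumptions, not the official schema.
from dataclasses import dataclass
from typing import List

@dataclass
class MultiChoiceQuestion:
    context: str           # the passage the question is asked about (one of ~800 texts, 4 domains)
    question: str
    question_type: str     # annotation enabling diagnostics of reasoning strategies
    options: List[str]     # candidate answers; one may indicate the question is unanswerable
    correct_option: int    # index into `options`

# Illustrative instance (invented for this sketch):
example = MultiChoiceQuestion(
    context="(a short narrative passage would go here)",
    question="Why did the narrator leave early?",
    question_type="unanswerable",   # e.g. text-based, world-knowledge, or unanswerable
    options=[
        "She was late for work.",
        "She disliked the party.",
        "She had another appointment.",
        "Not enough information.",
    ],
    correct_option=3,
)
```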
Large-scale knowledge in the artificial intelligence of things (AIoT) field urgently needs effective...
Question-answering datasets require a broad set of reasoning skills. We show how to use question dec...
Recent developments in pre-trained neural language modeling have led to leaps in accuracy on common-...
Intelligent interaction between humans and computers has been a dream of artificial intelligence sin...
Natural language has long been the most prominent tool for humans to disseminate, learn and create k...
One way to measure the intelligence of a system is through Question Answerin...
Alongside huge volumes of research on deep learning models in NLP in recent years, there has bee...
Natural language understanding (NLU) of text is a fundamental challenge in AI, and it has received s...
Automatic question answering (QA), which can greatly facilitate the access to information, is an imp...
By virtue of being prevalently written in natural language (NL), requirements are prone to various d...
Composing knowledge from multiple pieces of texts is a key challenge in multi-hop question answering...
Open-domain question answering (QA) is an emerging information-seeking paradigm, which automatically...
Question answering (QA) is one of the most important and challenging tasks for understanding human l...
A Question Answering (QA) system is an automated approach to retrieving correct responses to the questio...
In the QA and information retrieval domains, progress has been assessed via evaluation campaigns (Clef...