While large pre-trained language models (PLMs) have shown strong performance on discriminative tasks, a significant gap remains between them and humans on explanation-related tasks. Among these, explaining why a statement is wrong (e.g., because it violates commonsense) is particularly challenging. The main difficulty lies in finding the conflict point, where the statement contradicts the real world. This paper proposes Neon, a two-phase, unsupervised explanation generation framework. Neon first generates correct instantiations of the statement (phase I), then uses them to prompt large PLMs to find the conflict point and complete the explanation (phase II). We conduct extensive experiments on two standard explanation benchmarks, i.e.,...
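A minimal sketch of how such a two-phase prompting pipeline might look. Everything here is an assumption for illustration: the `generate` callable stands in for any PLM completion API, and the function names and prompt wording (`phase_one_instantiations`, `phase_two_explanation`) are hypothetical, not the paper's actual prompts.

```python
# Hypothetical sketch of a two-phase, prompt-based explanation pipeline
# in the spirit of Neon. `generate` stands in for any large PLM's
# text-completion interface; its name and signature are assumptions.

from typing import Callable, List


def phase_one_instantiations(statement: str,
                             generate: Callable[[str], str],
                             k: int = 3) -> List[str]:
    """Phase I: produce k correct instantiations of a false statement."""
    prompt = (f"The statement \"{statement}\" is against commonsense.\n"
              "Write a corrected version of it:\n")
    return [generate(prompt).strip() for _ in range(k)]


def phase_two_explanation(statement: str,
                          instantiations: List[str],
                          generate: Callable[[str], str]) -> str:
    """Phase II: prompt the PLM with the correct instantiations so it
    can locate the conflict point and complete the explanation."""
    context = "\n".join(f"- {inst}" for inst in instantiations)
    prompt = (f"Correct statements:\n{context}\n"
              f"False statement: {statement}\n"
              "Explain why the false statement is wrong:\n")
    return generate(prompt).strip()
```

In this sketch, `generate` could wrap any completion endpoint, and the phase I outputs feed directly into the phase II prompt, mirroring the generate-then-prompt structure the abstract describes.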
Argumentation mining represents an advanced form of human language understanding by the machine. This i...
Model interpretability methods are often used to explain NLP model decisions on tasks such as text c...
As deep learning models become increasingly complex, practitioners are relying more on post hoc expl...
How can prompting a large language model like GPT-3 with explanations improve in-context learning? W...
Natural language explanations (NLEs) are a special form of data annotation in which annotators ident...
Explanations shed light on a machine learning model's rationales and can aid in identifying deficien...
To increase trust in artificial intelligence systems, a promising research direction consists of des...
As the demand for explainable deep learning grows in the evaluation of language technologies, the va...
XAI with natural language processing aims to produce human-readable explanations as evidence for AI ...
A Markov Decision Process (MDP) policy presents, for each state, an action, which preferab...
Natural Language Inference (NLI) models are known to learn from biases and artefacts within their tr...
A significant drawback of eXplainable Artificial Intelligence (XAI) approaches is the assumption of ...