Determining the plausibility of causal relations between clauses is a commonsense reasoning task that requires complex inference. The standard approach is to fine-tune a large pretrained language model on a task-specific dataset. However, training data for this task is often scarce, which leads to unstable training or reliance on shallow dataset features. This paper presents several techniques for making models more robust in the domain of causal reasoning. First, we perform adversarial training by generating perturbed inputs through synonym substitution. Second, building on a linguistic theory of discourse connectives, we perform data augmentation using a discourse parser for detecting ca...
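The synonym-substitution perturbation mentioned above can be illustrated with a minimal sketch. This is not the paper's actual pipeline: the tiny `SYNONYMS` table is a placeholder assumption standing in for a real lexical resource such as WordNet or counter-fitted embeddings, and the substitution policy is deliberately simplistic.

```python
import random

# Toy synonym table (assumption for illustration only); a real
# adversarial-training setup would draw candidates from WordNet
# or a counter-fitted embedding space.
SYNONYMS = {
    "causes": ["produces", "triggers"],
    "rain": ["rainfall"],
    "wet": ["damp", "soaked"],
}

def perturb(sentence, rate=1.0, seed=0):
    """Return a synonym-substituted variant of `sentence`.

    Each whitespace token with a known synonym is replaced with
    probability `rate`; all other tokens pass through unchanged,
    so the token count is preserved.
    """
    rng = random.Random(seed)
    out = []
    for tok in sentence.split():
        candidates = SYNONYMS.get(tok.lower())
        if candidates and rng.random() < rate:
            out.append(rng.choice(candidates))
        else:
            out.append(tok)
    return " ".join(out)

original = "rain causes the ground to become wet"
print(perturb(original))        # a perturbed variant
print(perturb(original, rate=0.0))  # rate 0 leaves the input intact
```

During adversarial training, such perturbed variants would be paired with the original labels so the model learns to keep its causal-plausibility prediction stable under meaning-preserving lexical changes.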