We discuss the impact of presenting people with explanations for Artificial Intelligence (AI) decisions powered by neural networks, framed in terms of three types of logical reasoning (inductive, deductive, and abductive). Starting from examples in the existing literature on explaining artificial neural networks, we observe that abductive reasoning is (unintentionally) the type most commonly used by default in user testing when comparing the quality of explanation techniques. We discuss whether this may be because abductive reasoning balances the technical difficulty of generating explanations against their effectiveness for users. Also, by illustrating how the original (abductive) explanation can be converted into the remaining two reasoning types ...
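As a compact reference for the three reasoning types named above, the classical (Peircean) schemas can be written as inference patterns over a rule, a case, and a result. The mapping to neural-network explanations sketched in the comments below is an illustrative reading under our own assumptions, not a formalisation taken from the work itself.

```latex
% Peirce's three inference schemas over a rule R, a case C, and a result E.
% Requires amsmath and amssymb.
\begin{align*}
  \text{Deduction:} \quad & R,\ C \;\vdash\;   E && \text{(rule and case entail the result)}\\
  \text{Induction:} \quad & C,\ E \;\leadsto\; R && \text{(case and result suggest the rule)}\\
  \text{Abduction:} \quad & R,\ E \;\leadsto\; C && \text{(rule and result suggest the case that best explains } E\text{)}
\end{align*}
% Illustrative reading for an explained network decision (an assumption, not
% taken from the abstract above): R is the model's learned input-output
% behaviour, E is the observed prediction, and C is the set of input features
% cited in the explanation. A saliency-style explanation then has the shape
% of abduction: it infers which case most plausibly accounts for the result.
```

Read this way, converting an abductive explanation into the other two forms amounts to changing which element of the triple (rule, case, result) is treated as the conclusion.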
Despite the rapid growth in attention on eXplainable AI (XAI) of late, explanations in the literature ...
This paper describes a neural network design using auxiliary inputs, namely the indicators, that act...
What Deep Neural Networks (DNNs) can do is impressive, yet they are notoriously opaque. Responding to ...
The large and still increasing popularity of deep learning clashes with a major limit of neural networks ...
We investigate the potential of Neural-Symbolic integration to reason about what a neural network has ...
The opaqueness of deep neural networks hinders their employment in safety-critical applications. This ...
The most effective Artificial Intelligence (AI) systems exploit complex machine learning models to f...
Issues regarding explainable AI involve four components: users, laws and regulations, explanations a...
Deep neural networks (DNNs), a particularly effective type of artificial intelligence, currently lack ...
In recent decades, artificial intelligence (AI) systems are becoming increasingly ubiquitous from lo...
Recent work on interpretability in machine learning and AI has focused on the building of simplified...
Accepted at IJCAI19 Neural-Symbolic Learning and Reasoning Workshop (https://sites.google.com/view/n...
There is broad agreement that Artificial Intelligence (AI) systems, particularly those using Machine Learning ...
Explanation is an important function in symbolic artificial intelligence (AI). For instance, explana...