The opaqueness of deep neural networks hinders their deployment in safety-critical applications, an issue that is driving the research community to focus on eXplainable Artificial Intelligence (XAI) techniques. XAI algorithms fall into two categories: those that explain the predictions of black-box models and those that build interpretable models from the start. While interpretable models foster user trust, their performance is usually inferior to that of traditional black-box models such as neural networks. To bridge this gap, this chapter presents an extensive framework introducing a family of neural networks called Logic Explained Networks (LENs). The most notable advantage of this approach is that LENs achieve performances that are comparable ...
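The abstract only states the idea at a high level; as a rough intuition for what an "explained" prediction can look like, the following is a minimal, self-contained sketch of a classifier that operates on human-interpretable Boolean concepts and from whose trained weights a propositional logic explanation is read off. Everything here (the toy concepts, `fit_concept_classifier`, `extract_rule`, the top-k selection) is a hypothetical illustration under simplifying assumptions, not the LEN architectures proposed in the chapter.

```python
# Illustrative sketch only: a concept-based classifier plus rule extraction.
# Not the chapter's implementation; all names and thresholds are hypothetical.
import itertools
import numpy as np

def fit_concept_classifier(C, y, lr=0.5, epochs=2000):
    """Logistic regression over concept activations C (n_samples x n_concepts)."""
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.1, size=C.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(C @ w + b)))   # predicted probability
        w -= lr * (C.T @ (p - y)) / len(y)       # gradient step on weights
        b -= lr * np.mean(p - y)                 # gradient step on bias
    return w, b

def extract_rule(w, b, concept_names, top_k=2):
    """Read a DNF explanation off the trained weights: keep the top_k most
    relevant concepts and enumerate the truth assignments that fire."""
    relevant = np.argsort(-np.abs(w))[:top_k]
    minterms = []
    for values in itertools.product([0, 1], repeat=top_k):
        x = np.zeros_like(w)
        x[relevant] = values
        if 1.0 / (1.0 + np.exp(-(x @ w + b))) > 0.5:
            literals = [concept_names[i] if v else f"~{concept_names[i]}"
                        for i, v in zip(relevant, values)]
            minterms.append(" & ".join(literals))
    return " | ".join(f"({m})" for m in minterms) or "False"

if __name__ == "__main__":
    # Toy task: the label is "has_wings AND lays_eggs"; has_fur is irrelevant.
    concept_names = ["has_wings", "lays_eggs", "has_fur"]
    C = np.array(list(itertools.product([0, 1], repeat=3)), dtype=float)
    y = np.array([c[0] and c[1] for c in C], dtype=float)
    w, b = fit_concept_classifier(C, y)
    print("explanation:", extract_rule(w, b, concept_names))
```

On this toy task the extracted formula should reduce to `(has_wings & lays_eggs)`, i.e., the kind of concise logic explanation that an explainable-by-design model can return alongside its prediction.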