This thesis addresses the problem of learning concept descriptions that are interpretable, or explainable. Explainability is understood as the ability to justify the learned concept in terms of the existing background knowledge. The starting point for the work was an existing system that would induce only fully explainable rules. The system performed well when the model used during induction was complete and correct. In practice, however, models are likely to be imperfect, i.e. incomplete and incorrect. We report here a new approach that achieves explainability with imperfect models. The basis of the system is the standard inductive search driven by an accuracy-oriented heuristic, biased towards rule explainability. The bias is abandoned wh...
The opacity of some recent Machine Learning (ML) techniques has raised fundamental questions on the...
This paper presents a method for using qualitative models to guide inductive learning. Our objective...
We investigate here concept learning from incomplete examples, denoted here as...
The explainability of a model has been a topic of debate. Some research states explainability is unn...
Explanations shed light on a machine learning model's rationales and can aid in identifying deficien...
This paper presents a scheme for learning complex descriptions, such as logic formulas, from example...
We take inspiration from the study of human explanation to inform the design and evaluation of inter...
The attempt to concretely define the concept of explainability in terms of other vaguely described n...
A transformation of descriptions of concepts without a change of meaning makes sense under a heterog...
It is increasingly apparent that knowledge is essential for intelligent behavior. This has led to a ...
Explainable AI was born as a pathway to allow humans to explore and understand the inner working of ...
Recent work on interpretability in machine learning and AI has focused on the building of simplified...
As deep learning methods have obtained tremendous success over the years, our understanding of these...
We investigate here concept learning from incomplete examples. Our first purpose is to discuss to wh...