Recently, Logic Explained Networks (LENs) have been proposed as explainable-by-design neural models that provide logic explanations for their predictions. However, these models have only been applied to vision and tabular data, and they mostly favour the generation of global explanations, while local ones tend to be noisy and verbose. For these reasons, we propose LENp, which improves local explanations by perturbing input words, and we test it on text classification. Our results show that (i) LENp provides better local explanations than LIME in terms of sensitivity and faithfulness, and (ii) logic explanations are more useful and user-friendly than the feature scoring provided by LIME, as attested by a human survey.
Numerical tables are widely employed to communicate or report the classification performance of mach...
Over the years, we have seen the development and success of modern deep learning models, which learn ...
The thesis tackles two problems in the recently-born field of Explainable AI (XAI), and proposes som...
The opaqueness of deep neural networks hinders their employment in safety-critical applications. Thi...
The large and still increasing popularity of deep learning clashes with a major limit of neural netw...
The behavior of deep neural networks (DNNs) is hard to understand. This makes it necessary to explor...
We propose an approach to faithfully explaining text classification models, using a specifically des...
We build on abduction-based explanations for machine learning and develop a method for computing loc...
Deep neural networks are usually considered black-boxes due to their complex internal architecture, ...
Due to the black-box nature of deep learning models, methods for explaining the models’ results are ...
Recent years have witnessed increasing interest in developing interpretable models in Natural Langu...
Machine reading comprehension has aroused wide concerns, since it explores the potential of model fo...
Explaining the decisions of a Deep Learning Network is imperative to safeguard end-user trust. Such ...
Natural language explanations (NLEs) are a special form of data annotation in which annotators ident...
Research on Deep Learning has achieved remarkable results in recent years, mainly thanks to the com...