Explainable AI (XAI) offers a wide range of algorithmic solutions to the problem of AI's opacity, but ensuring their usefulness remains a challenge. In this study, we propose a multi-explanation XAI system that combines surrogate rules, LIME, and nearest-neighbor examples on top of a random forest. Through an experiment on an e-sports prediction task, we demonstrate the feasibility of working with multiple forms of explanation and measure their usefulness. Taking users' preferences into account, we offer new perspectives for XAI design and evaluation, highlighting the concepts of data difficulty and of prior agreement between users and AI.
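The system described above pairs a single random forest with three complementary explanation styles. As a rough illustration only (not the authors' implementation), the minimal sketch below assumes scikit-learn, the `lime` package, and a toy synthetic dataset standing in for the e-sports data; it produces a global surrogate rule set, a local LIME attribution, and a nearest-neighbor example for the same prediction.

```python
# Minimal sketch: one random forest, three explanation styles.
# Assumes scikit-learn and the `lime` package; the data is a synthetic stand-in.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.neighbors import NearestNeighbors
from lime.lime_tabular import LimeTabularExplainer

# Toy stand-in for the e-sports match features and win/loss labels.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
feature_names = [f"f{i}" for i in range(X.shape[1])]

forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# 1) Global surrogate rules: a shallow tree fitted to the forest's own predictions.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, forest.predict(X))
print(export_text(surrogate, feature_names=feature_names))

# 2) Local feature attribution with LIME for a single instance.
explainer = LimeTabularExplainer(
    X, feature_names=feature_names, class_names=["loss", "win"],
    discretize_continuous=True,
)
instance = X[0]
lime_exp = explainer.explain_instance(instance, forest.predict_proba, num_features=4)
print(lime_exp.as_list())

# 3) Example-based explanation: the most similar other training instance and its label.
nn = NearestNeighbors(n_neighbors=2).fit(X)
_, idx = nn.kneighbors(instance.reshape(1, -1))
neighbor = idx[0][1]  # skip the instance itself
print("nearest neighbor label:", y[neighbor])
```

Each of the three outputs could then be shown to users side by side, which is the setting the experiment evaluates.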