The aim of this project is to improve human decision-making through explainability; specifically, by explaining the (un)certainty of machine learning models. Prior research has used uncertainty measures to promote trust and support decision-making, but explaining why a model is confident (or not confident) in a given prediction remains an open direction. By explaining model uncertainty, we can promote trust, improve understanding, and improve decision-making for users.
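One common way to quantify the (un)certainty this project aims to explain is the entropy of a model's predictive distribution: a prediction that concentrates probability mass on one class is confident, while a spread-out distribution is uncertain. The sketch below is purely illustrative (the class probabilities are made up), not part of the project's method:

```python
import numpy as np

def predictive_entropy(probs):
    """Shannon entropy of a predictive distribution; higher = more uncertain."""
    probs = np.clip(probs, 1e-12, 1.0)  # avoid log(0)
    return float(-np.sum(probs * np.log(probs)))

# Hypothetical 3-class model outputs: one confident, one uncertain.
confident = np.array([0.97, 0.02, 0.01])
uncertain = np.array([0.40, 0.35, 0.25])

print(predictive_entropy(confident))  # low entropy: model is confident
print(predictive_entropy(uncertain))  # high entropy: model is uncertain
```

An uncertainty explanation could then surface this score to the user alongside the prediction, rather than the raw class probabilities alone.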
A common criterion for Explainable AI (XAI) is to support users in establishing appropriate trust in ...
Machine Learning models can output confident but incorrect predictions. To add...
Explainable AI provides insights to users into the why for model predictions, offering potential for...
Software-intensive systems that rely on machine learning (ML) and artificial intelligence (AI) are i...
The use of Artificial Intelligence (AI) decision support systems is increasing in high-stakes conte...
Machine learning and artificial intelligence will be deeply embedded in the intelligent systems huma...
How and when can we depend on machine learning systems to make decisions for human beings? This is pr...
Uncertainty quantification can be broadly defined as the process of characterizing, estimating, prop...
Research in automation in machine vision, robotic planning, medical diagnosis, and many othe...
Both uncertainty estimation and interpretability are important factors for trustworthy machine learn...
Reasoning with uncertain information has received a great deal of attention recently, as this issue ...
Deep learning, and in particular neural networks (NNs), have seen a surge in popularity over the pas...
Automated decision-making systems are increasingly being deployed in areas with high personal and so...