Prediction methods can be augmented by local explanation methods (LEMs) to perform root cause analysis for individual observations. But while most recent research on LEMs focuses on low-dimensional problems, real-world datasets commonly have hundreds or thousands of variables. Here, we investigate how LEMs perform for high-dimensional industrial applications. Five prediction methods (penalized logistic regression, LASSO, gradient boosting, random forest, and support vector machines) and three LEMs (TreeExplainer, Kernel SHAP, and the conditional normal sampling importance (CNSI)) were combined into twelve explanation approaches (TreeExplainer applies only to the two tree-based methods). These approaches were used to compute explanations for simulated data and real-world industrial data with simulate...
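A minimal sketch of how such prediction methods and LEMs can be paired, assuming scikit-learn and the shap package; CNSI has no off-the-shelf implementation here and is omitted, and the toy data, seeds, and parameters are illustrative assumptions rather than the paper's setup:

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))           # high-dimensional toy data
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # label driven by two features

# Tree-based model -> exact, tree-specific TreeExplainer
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
tree_phi = shap.TreeExplainer(rf).shap_values(X[:5])

# Non-tree model -> model-agnostic Kernel SHAP with a background sample
svm = SVC(probability=True, random_state=0).fit(X, y)
ker = shap.KernelExplainer(svm.predict_proba, shap.sample(X, 50))
kernel_phi = ker.shap_values(X[:5], nsamples=200)
```

Each row of the resulting attribution matrices is a local explanation for one observation, which is what a root cause analysis would inspect.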
Increasingly complex learning methods such as boosting, bagging and deep learning have made ML model...
Nonlinear dimensionality reduction (NLDR) algorithms such as t-SNE are often employed to visually an...
The lack of interpretability of machine learning models is a drawback of their use. To better unders...
Local additive explanation methods are increasingly used to understand the pre...
In recent years the use of complex machine learning has increased drastically. These complex black b...
A key challenge for decision makers when incorporating black box machine learned models into practic...
The increased predictive power of nonlinear models comes at the cost of interpretability of their term...
Local Interpretable Model-Agnostic Explanations (LIME) is a popular technique used to increase the i...
With the advancement of technology for artificial intelligence (AI) based solutions and analytics co...
Machine learning and artificial intelligence (ML/AI), previously considered black box approaches, ar...
The locally interpretable model-agnostic explanations (LIME) method is one of the most popular methods u...
Local explanations aim to provide transparency for individual instances and their associated predict...
The thesis tackles two problems in the nascent field of Explainable AI (XAI), and proposes som...
Accumulated Local Effects (ALE) is a method for accurately estimating feature effects, overcoming fun...
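As a rough illustration of the ALE idea (a from-scratch sketch, not the cited work's implementation), the one-dimensional estimate can be written as below; the function name ale_1d, the quantile binning, and all parameters are assumptions for illustration:

```python
import numpy as np

def ale_1d(predict, X, j, n_bins=10):
    """One-dimensional ALE of feature j: accumulate mean local
    prediction changes across quantile bins, then center."""
    x = X[:, j]
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1))
    bins = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, n_bins - 1)
    counts = np.bincount(bins, minlength=n_bins)
    local = np.zeros(n_bins)
    for b in range(n_bins):
        rows = X[bins == b]
        if len(rows) == 0:
            continue
        lo = rows.copy(); lo[:, j] = edges[b]      # bin lower edge
        hi = rows.copy(); hi[:, j] = edges[b + 1]  # bin upper edge
        local[b] = np.mean(predict(hi) - predict(lo))
    ale = np.cumsum(local)
    # Center so the count-weighted mean effect over the data is zero
    return edges[1:], ale - np.average(ale, weights=counts)

# Toy usage: for f(x) = 2*x0 + x1^2, the ALE of feature 0 should be
# approximately linear in the grid with slope 2
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
grid, ale = ale_1d(lambda A: 2 * A[:, 0] + A[:, 1] ** 2, X, j=0, n_bins=20)
```

Because only within-bin differences are averaged, the estimate stays faithful under correlated features, which is the limitation of partial dependence that ALE is designed to overcome.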