Abstract In the future, more and more systems will be powered by AI. This may exacerbate existing blind spots in explainability research, such as focusing on the outputs of an individual AI pipeline rather than taking a holistic, integrative view of the system dynamics of data, algorithms, stakeholders, context, and their interactions. AI systems will increasingly rely on the patterns and models of other AI systems, which will likely introduce a major shift in the desiderata of interpretability, explainability, and transparency. In this world of Cascading AI (CAI), AI systems will use the output of other AI systems as their inputs. The typical formulations of desiderata for explaining AI decision-making, such as post-hoc interpretability or ...
Recent work in explainable artificial intelligence (XAI) attempts to render opaque AI systems unders...
An important subdomain in research on Human-Artificial Intelligence interaction is Explainable AI (X...
reliance on internal representation. Further discussion showed that many of the hard problems encou...
In this paper, we explore the use of metaphors for people working with artificial intelligence, in p...
A key challenge in the design of AI systems is how to support people in understanding them. We addre...
We characterize three notions of explainable AI that cut across research fields: opaque systems that...
Recent work on interpretability in machine learning and AI has focused on the building of simplified...
This research paper explores self-explaining AI models that bridge the gap between complex black-box...
Recent work on interpretability in machine learning and AI has focused on the building of simplified...
Artificial intelligence (AI) has shown great potential in many real-world applications, for example,...
Governments look at explainable artificial intelligence's (XAI) potential to tackle the criticisms o...
The rapid growth of research in explainable artificial intelligence (XAI) follows on two substantial...
In 1991, researchers at the Center for the Learning Sciences of Carnegie Mellon University were c...
Exploring end-users’ understanding of Artificial Intelligence (AI) systems’ behaviours and outputs i...
eXplainable AI focuses on generating explanations for the output of an AI algorithm to a user, usual...