In order to engender trust in AI, humans must understand what an AI system is trying to achieve, and why. To meet this need, the underlying AI process must produce justifications and explanations that are both transparent and comprehensible to the user. AI Planning is well placed to address this challenge. In this paper we present a methodology for providing initial explanations of the decisions made by the planner. Explanations are created by allowing the user to suggest alternative actions in plans and then comparing the resulting plans with the one found by the planner. The methodology is implemented in the new XAI-Plan framework.
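To make the compare-the-alternative idea concrete, the following Python sketch shows one way such an explanation loop could work: the user proposes a different action for a given plan step, the planner is re-run with that choice enforced, and the resulting plan is contrasted with the original by cost. All names here (Plan, replan_with_choice, the stub planner) are illustrative assumptions for this sketch, not the actual XAI-Plan API.

```python
# Minimal sketch of a contrastive explanation loop, assuming a callable that
# re-plans with a user-chosen action enforced at a given step. Not the
# XAI-Plan implementation; names and signatures are hypothetical.

from dataclasses import dataclass
from typing import Callable, List, Optional


@dataclass
class Plan:
    actions: List[str]
    cost: float


def explain_alternative(
    original: Plan,
    step: int,
    user_action: str,
    replan_with_choice: Callable[[int, str], Optional[Plan]],
) -> str:
    """Contrast the planner's plan with one forced to use `user_action` at `step`."""
    alternative = replan_with_choice(step, user_action)
    if alternative is None:
        return (f"No valid plan exists if '{user_action}' is used at step {step}; "
                f"the planner's choice '{original.actions[step]}' is required.")
    delta = alternative.cost - original.cost
    if delta > 0:
        return (f"Using '{user_action}' at step {step} yields a plan costing "
                f"{delta:.1f} more ({alternative.cost:.1f} vs {original.cost:.1f}), "
                f"so the planner preferred '{original.actions[step]}'.")
    return (f"Using '{user_action}' at step {step} is at least as good "
            f"({alternative.cost:.1f} vs {original.cost:.1f}); the planner's "
            f"choice is not strictly better.")


# Toy usage with a stub planner: in practice `replan_with_choice` would invoke
# a real planner on a model in which the user's action is enforced at that step.
if __name__ == "__main__":
    original = Plan(actions=["load", "drive-a-b", "unload"], cost=10.0)

    def stub_replan(step: int, action: str) -> Optional[Plan]:
        # Pretend the user's detour is feasible but more expensive.
        return Plan(actions=["load", action, "drive-c-b", "unload"], cost=14.0)

    print(explain_alternative(original, 1, "drive-a-c", stub_replan))
```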