As applications of Natural Language Processing (NLP) in sensitive areas such as political profiling and the review of essays in education proliferate, there is a growing need for transparency in NLP models to build trust with stakeholders and to identify biases. Much work in Explainable AI has aimed to devise explanation methods that give humans insight into the workings and predictions of NLP models. While these methods distill predictions from complex models such as neural networks into consumable explanations, how humans understand these explanations is still largely unexplored. Innate human tendencies and biases can hinder their understanding of these explanations, and can also lead them to misjudge models and predict...
Many explainability methods have been proposed as a means of understanding how a learned machine lea...
This thesis is focused on exploring explainable AI algorithms and in particular Layer-Wise Relevance...
Explainable AI gives users insight into the why behind model predictions, offering potential for...
Neural networks for NLP are becoming increasingly complex and widespread, and there is a growing con...
With more data and computing resources available these days, we have seen many novel Natural Languag...
In the past decade, natural language processing (NLP) systems have come to be built almost exclusive...
Natural Language Inference (NLI) models are known to learn from biases and artefacts within their tr...
As the demand for explainable deep learning grows in the evaluation of language technologies, the va...
A multitude of explainability methods and associated fidelity performance metrics have been proposed...
Explainable AI (XAI) is a research field dedicated to formulating ways of opening the black box...
While a lot of research in explainable AI focuses on producing effective explanations, less work is ...
Explainable artificial intelligence and interpretable machine learning are research fields growing i...
Human insights play an essential role in artificial intelligence (AI) systems, as they increase the co...
Saliency methods are frequently used to explain Deep Neural Network-based models. Adebayo et al.'s w...
A popular approach to unveiling the black box of neural NLP models is to leverage saliency methods, ...
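To make the saliency idea mentioned above concrete, here is a minimal sketch of gradient-based "gradient x input" attribution. The classifier, vocabulary, and weights are all invented for illustration (a toy logistic-regression sentiment model over bag-of-words features, not any model from the works cited); real saliency methods are applied to neural networks, but the attribution rule is the same.

```python
import numpy as np

# Hypothetical toy sentiment classifier: logistic regression over a
# bag-of-words input. The vocabulary and weights below are assumptions
# made up for this sketch.
vocab = ["the", "movie", "was", "great", "terrible"]
weights = np.array([0.0, 0.1, 0.0, 2.0, -2.5])
bias = 0.0

def predict(x):
    """Probability of the positive class for a bag-of-words vector x."""
    return 1.0 / (1.0 + np.exp(-(x @ weights + bias)))

def gradient_saliency(x):
    """'Gradient x input' attribution: (d p / d x_i) * x_i per token."""
    p = predict(x)
    grad = p * (1.0 - p) * weights  # analytic gradient of sigmoid(w.x + b)
    return grad * x

# Input sentence: "the movie was great" (no occurrence of "terrible").
x = np.array([1.0, 1.0, 1.0, 1.0, 0.0])
saliency = gradient_saliency(x)
for token, s in zip(vocab, saliency):
    print(f"{token:>8s}: {s:+.3f}")
```

On this example the attribution concentrates on "great", the token driving the positive prediction, while absent tokens receive zero relevance; this is the kind of token-level heat map that saliency methods produce for neural NLP models, and whose human interpretation the abstracts above investigate.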