Large Language Models (LLMs) have significantly advanced the field of Natural Language Processing (NLP), but their lack of interpretability remains a major concern. Current methods for interpreting LLMs are post hoc, applied after inference time, and have limitations such as a focus on low-level features and a lack of explainability at higher-level text units. In this work, we introduce proto-lm, a prototypical network-based white-box framework that allows LLMs to learn immediately interpretable embeddings during the fine-tuning stage while maintaining competitive performance. Our method's applicability and interpretability are demonstrated through experiments on a wide range of NLP tasks, and our results indicate a new possibility of cr...
The reusability of state-of-the-art Pre-trained Language Models (PLMs) is often limited by their gen...
Understanding black-box machine learning models is important towards their widespread adoption. Howe...
Unexplainable black-box models create scenarios where anomalies cause deleterious responses, thus cr...
While Transformer language models (LMs) are state-of-the-art for information extraction, long text i...
In this paper, we move towards combining large parametric models with non-parametric prototypical ne...
Pretrained language models have become the standard approach for many NLP tasks due to strong perfor...
We present ProtoTEx, a novel white-box NLP classification architecture based on prototype networks. ...
Transformer-based pretrained language models (LMs) are ubiquitous across natural language understand...
Pretrained language models are expected to effectively map input text to a set of vectors while pres...
Recently, the development of pre-trained language models has brought natural language processing (NL...
Large language models have exhibited emergent abilities, demonstrating exceptional performance acros...
Realizing the recent advances in Natural Language Processing (NLP) to the legal sector poses challen...
Large language models (LLMs) have achieved remarkable advancements in the field of natural language ...
We propose the LLMs4OL approach, which utilizes Large Language Models (LLMs) for Ontology Learning (...
In recent years, there has been significant progress in developing pre-trained language models for N...