Pretrained language models (PLMs) form the basis of most state-of-the-art NLP technologies. Nevertheless, they are essentially black boxes: Humans do not have a clear understanding of what knowledge is encoded in different parts of the models, especially in individual neurons. The situation is different in computer vision, where feature visualization provides a decompositional interpretability technique for neurons of vision models. Activation maximization is used to synthesize inherently interpretable visual representations of the information encoded in individual neurons. Our work is inspired by this but presents a cautionary tale on the interpretability of single neurons, based on the first large-scale attempt to adapt activation maximiz...
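The activation-maximization idea described above can be illustrated with a minimal, hypothetical sketch: a toy one-"neuron" model (a fixed linear filter followed by tanh, not any of the PLMs or vision models discussed), where gradient ascent on the input synthesizes an input that maximally activates the neuron.

```python
import numpy as np

# Toy activation maximization (hypothetical model, for illustration only):
# one "neuron" computes tanh(w . x); we run gradient ascent on the input x
# to find an input that maximizes the neuron's activation.

w = np.array([1.0, -0.5, 0.8, 0.3, -1.2, 0.7, 0.2, -0.4])  # fixed neuron weights

def activation(x):
    return np.tanh(w @ x)

def grad(x):
    # d/dx tanh(w . x) = (1 - tanh(w . x)^2) * w
    return (1.0 - np.tanh(w @ x) ** 2) * w

x = np.zeros_like(w)            # start from a neutral input
for _ in range(200):
    x += 0.1 * grad(x)          # gradient ascent step on the input

# the optimized input aligns with w, so the activation saturates toward 1
print(activation(x))
```

In vision models the same loop is run over image pixels (with regularizers to keep the result natural-looking); the optimized input is then inspected as a visualization of what the neuron encodes.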
Modern neural networks specialised in natural language processing (NLP) are not implemented with any...
Neural Language Models (NLMs) have made tremendous advances during the last ye...
Published: 12 October 2021. Over the past 2 decades, researchers have tried to uncover how the human b...
Linking computational natural language processing (NLP) models and neural responses to language in t...
One of the roadblocks to a better understanding of neural networks' internals is polysemanti...
We analyze a family of large language models in a manner so lightweight that it can be done on a sing...
Several studies investigated the linguistic information implicitly encoded in Neural Language Models...
Single neurons in neural networks are often interpretable in that they represent individual, intuiti...
While many studies have shown that linguistic information is encoded in hidden word representations,...
How can one conceive of the neuronal implementation of the processing model we proposed in our targe...
Despite the remarkable evolution of deep neural networks in natural language processing (NLP), their...
Large pre-trained language models (PLMs) such as BERT and XLNet have revolutionized the field of nat...
A key challenge for cognitive neuroscience is deciphering the representational schemes of the brain....
Classifiers trained on auxiliary probing tasks are a popular tool to analyze the representations lea...
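The probing-classifier setup mentioned above can be sketched in a toy form (synthetic "hidden representations" rather than real PLM activations, all names hypothetical): a logistic-regression probe is trained on frozen features to test whether a binary linguistic property is linearly decodable from them.

```python
import numpy as np

# Toy probing classifier (synthetic data, for illustration only):
# frozen "representations" are noise plus a signal direction that encodes
# a binary property; a logistic-regression probe is trained on them, and
# high held-in accuracy suggests the property is (linearly) encoded.

rng = np.random.default_rng(42)
dim, n = 16, 400
labels = rng.integers(0, 2, size=n)                  # the probed property
signal = rng.normal(size=dim)                        # direction encoding it
reps = rng.normal(size=(n, dim)) + np.outer(2 * labels - 1, signal)

w, b = np.zeros(dim), 0.0                            # probe parameters
for _ in range(300):                                 # gradient descent on log loss
    p = 1.0 / (1.0 + np.exp(-(reps @ w + b)))        # predicted probabilities
    err = p - labels
    w -= 0.1 * reps.T @ err / n
    b -= 0.1 * err.mean()

accuracy = ((reps @ w + b > 0) == labels).mean()
print(accuracy)
```

In real probing studies the representations come from a frozen pretrained model, and controls (random baselines, selectivity measures) are needed before concluding that high probe accuracy reflects information the model itself uses.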