Despite the remarkable evolution of deep neural networks in natural language processing (NLP), their interpretability remains a challenge. Previous work largely focused on what these models learn at the representation level. We break this analysis down further and study individual dimensions (neurons) in the vector representation learned by end-to-end neural models in NLP tasks. We propose two methods: Linguistic Correlation Analysis, a supervised method to extract the most relevant neurons with respect to an extrinsic task, and Cross-model Correlation Analysis, an unsupervised method to extract salient neurons with respect to the model itself. We evaluate the effectiveness of our techniques by ablating the identified neurons and reevaluati...
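To make the supervised ranking idea concrete, the following is a minimal sketch of ranking neurons by the weights of a linear probe trained on an extrinsic task and then ablating the top-ranked neurons to re-measure performance. The logistic-regression probe, the function names, and the random stand-in activations are illustrative assumptions, not the exact Linguistic Correlation Analysis procedure.

```python
# Sketch: supervised neuron ranking via linear-probe weights, plus ablation.
# Assumes activations of shape (n_tokens, n_neurons) and extrinsic-task labels.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def rank_neurons(activations, labels):
    """Rank neurons by the magnitude of their linear-probe weights."""
    probe = LogisticRegression(max_iter=1000)
    probe.fit(activations, labels)
    # Aggregate weight magnitude over classes; larger => more task-relevant.
    salience = np.abs(probe.coef_).sum(axis=0)
    return np.argsort(salience)[::-1]          # most salient neurons first

def ablate_and_score(activations, labels, neurons_to_ablate):
    """Zero out the given neurons and measure probe accuracy on held-out data."""
    ablated = activations.copy()
    ablated[:, neurons_to_ablate] = 0.0
    X_tr, X_te, y_tr, y_te = train_test_split(
        ablated, labels, test_size=0.2, random_state=0)
    probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return probe.score(X_te, y_te)

# Random data standing in for real NMT/LM hidden activations (hypothetical).
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 512))               # 2000 tokens, 512 neurons
y = rng.integers(0, 5, size=2000)              # 5 hypothetical tag classes
ranking = rank_neurons(X, y)
print("accuracy after ablating top-50 neurons:",
      ablate_and_score(X, y, ranking[:50]))
```

A larger accuracy drop when ablating the top-ranked neurons, relative to ablating the same number of random neurons, is the usual evidence that the ranking has identified task-relevant dimensions.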
The performance of deep learning in natural language processing has been spectacular, but th...
In this thesis, I explore neural machine translation (NMT) models via targeted investigation of vari...
Neural language models have achieved state-of-the-a...
Natural language processing models based on machine learning (ML-NLP models) have been developed to ...
Recent NLP studies reveal that substantial linguistic information can be attributed to single neuron...
Neural language models learn word representations that capture rich linguistic and conceptual inform...
Thesis: Ph.D., Massachusetts Institute of Technology, Department of Electrical Engineering and Comp...
Linking computational natural language processing (NLP) models and neural responses to language in t...
Deep learning algorithms trained to predict masked words from large amounts of ...
We present a methodology that explores how sentence structure is reflected in neural representations...
Efforts to understand the brain bases of language face the mapping problem: at...
Interpreting deep neural networks is of great importance to understand and verify deep models for na...
While many studies have shown that linguistic information is encoded in hidden word representations,...
In the field of natural language processing (NLP), recent research has shown that deep neural networ...
A Language Model (LM) is a helpful component of a variety of Natural Language Processing (NLP) syste...