Neural Language Models (NLMs) have made tremendous advances in recent years, achieving impressive performance on various linguistic tasks. Capitalizing on this, studies in neuroscience have started to use NLMs to study neural activity in the human brain during language processing. However, many questions remain unanswered regarding which factors determine the ability of a neural language model to capture brain activity (its 'brain score'). Here, we take first steps in this direction and examine the impact of test loss, training corpus and model architecture (comparing GloVe, LSTM, GPT-2 and BERT) on the prediction of functional Magnetic Resonance Imaging timecourses of participants listening to an audiobook.
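The "brain score" mentioned above is typically computed with an encoding model: a regularized linear regression maps model representations onto voxel timecourses, and the score is the held-out correlation between predicted and observed activity. Below is a minimal, hedged sketch of that idea using closed-form ridge regression on synthetic data; the array shapes, the alpha value, and the data itself are illustrative placeholders, not the paper's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trs, n_dims, n_voxels = 200, 32, 10  # fMRI timepoints, embedding dims, voxels

# Synthetic stand-ins: NLM features per timepoint, and noisy "fMRI" responses
X = rng.standard_normal((n_trs, n_dims))
W_true = rng.standard_normal((n_dims, n_voxels))
Y = X @ W_true + 0.5 * rng.standard_normal((n_trs, n_voxels))

# Train/test split over time
X_tr, X_te = X[:150], X[150:]
Y_tr, Y_te = Y[:150], Y[150:]

# Closed-form ridge regression: W = (X'X + alpha*I)^-1 X'Y
alpha = 1.0
W_hat = np.linalg.solve(X_tr.T @ X_tr + alpha * np.eye(n_dims), X_tr.T @ Y_tr)
Y_hat = X_te @ W_hat

# Brain score: mean Pearson r between predicted and actual voxel timecourses
r = [np.corrcoef(Y_hat[:, v], Y_te[:, v])[0, 1] for v in range(n_voxels)]
brain_score = float(np.mean(r))
print(round(brain_score, 3))
```

In practice, features are usually lagged to account for the hemodynamic response and the regularization strength is cross-validated per voxel, both omitted here for brevity.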
In contextually rich language comprehension settings listeners can rel...
Several popular sequence-based and pretrained language models have been found ...
Why do artificial neural networks model language so well? We claim that in order to answer this ques...
Several popular Transformer based language models have been found to be successful for text-driven b...
An interesting way to evaluate the representations obtained with machine learn...
Several major innovations in artificial intelligence (AI) (e.g. convolutional neural networks, exper...
The neuroscience of perception has recently been revolutionized with an integrative modeling approac...
The outstanding performance recently reached by Neural Language Models (NLMs) across many Natural La...
Linking computational natural language processing (NLP) models and neural responses to language in t...
Deep learning algorithms trained to predict masked words from large amount of ...
Functional brain images are rich and noisy data that can capture indirect sign...
A popular approach to decompose the neural bases of language consists in corre...