Deep language algorithms, like GPT-2, have demonstrated remarkable abilities to process text, and now constitute the backbone of automatic translation, summarization and dialogue. However, whether these models encode information that relates to human comprehension remains controversial. Here, we show that the representations of GPT-2 not only map onto the brain responses to spoken stories, but also predict the extent to which subjects understand the corresponding narratives. To this end, we analyze 101 subjects recorded with functional Magnetic Resonance Imaging while listening to 70 min of short stories. We then fit a linear mapping model to predict brain activity from GPT-2’s activations. Finally, we show ...
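The abstract above describes fitting a linear mapping model from GPT-2 activations to fMRI responses. A minimal sketch of that encoding approach, using ridge regression on simulated stand-in data (all shapes and values here are illustrative assumptions, not the study's actual features or recordings):

```python
import numpy as np

# Hedged sketch of a linear encoding model: regress (simulated) fMRI
# responses on (simulated) language-model activations with ridge regression.
rng = np.random.default_rng(0)
n, d, v = 200, 64, 10          # time points (TRs), feature dims, voxels -- illustrative

X = rng.standard_normal((n, d))                     # stand-in GPT-2 activations
W_true = rng.standard_normal((d, v))
Y = X @ W_true + 0.1 * rng.standard_normal((n, v))  # stand-in brain responses

# Hold out time points so the mapping is evaluated out-of-sample.
X_tr, X_te, Y_tr, Y_te = X[:160], X[160:], Y[:160], Y[160:]

# Ridge regression in closed form: W = (X'X + lam*I)^{-1} X'Y
lam = 1.0
W = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(d), X_tr.T @ Y_tr)

def voxelwise_corr(a, b):
    """Pearson correlation between responses and predictions, per voxel."""
    a = a - a.mean(axis=0)
    b = b - b.mean(axis=0)
    return (a * b).sum(axis=0) / (
        np.linalg.norm(a, axis=0) * np.linalg.norm(b, axis=0)
    )

scores = voxelwise_corr(Y_te, X_te @ W)  # one "brain score" per voxel
```

Encoding studies of this kind typically report such voxel-wise correlations ("brain scores") as the measure of how well model activations predict brain activity; the regularization strength is usually tuned by cross-validation rather than fixed as here.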
Several popular Transformer based language models have been found to be successful for text-driven b...
The neuroscience of perception has recently been revolutionized with an integrative modeling approac...
Several popular sequence-based and pretrained language models have been found to be successful for t...
Deep learning (DL) approaches may also inform the analysis of human brain activity. Here, a state-of...
Considerable progress has recently been made in natural language processing: d...
The activations of language transformers like GPT-2 have been shown to linearl...
Deep learning algorithms trained to predict masked words from large amounts of ...
A popular approach to decompose the neural bases of language consists in corre...
Theorists propose that the brain constantly generates implicit predictions that guide information pr...
How is information organized in the brain during natural reading? Where and when do the required pro...
How does the human brain construct narratives from a sequence of spoken words? Here we present a ben...
Neural Language Models (NLMs) have made tremendous advances during the last ye...
Story understanding involves many perceptual and cognitive subprocesses, from perceiving individual ...