Deep learning (DL) approaches may also inform the analysis of human brain activity. Here, a state-of-the-art DL tool for natural language processing, the Generative Pre-trained Transformer version 2 (GPT-2), is shown to generate meaningful neural encodings in functional MRI during narrative listening. Linguistic features of word unpredictability (surprisal) and contextual importance (saliency) were derived by applying GPT-2 to the text of a 12-min narrative. Segments of variable duration (from 15 to 90 s) defined the context for the next word, resulting in different sets of neural predictors for functional MRI signals recorded in 27 healthy listeners of the narrative. GPT-2 surprisal, estimating word prediction errors from the artific...
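The surprisal measure referenced above is the negative log-probability of a word given its preceding context. A minimal sketch of the computation, using a hypothetical toy probability table in place of the GPT-2-derived probabilities the study actually uses:

```python
import math

# Hypothetical next-word probability table; in the study these
# probabilities come from GPT-2 conditioned on 15-90 s of context.
next_word_probs = {
    ("the",): {"cat": 0.5, "dog": 0.3, "sky": 0.2},
}

def surprisal(context, word, probs):
    """Surprisal in bits: -log2 P(word | context)."""
    p = probs[tuple(context)][word]
    return -math.log2(p)

# A predictable word carries low surprisal; an unexpected one, high surprisal.
print(surprisal(["the"], "cat", next_word_probs))  # -log2(0.5) = 1.0
print(surprisal(["the"], "sky", next_word_probs))  # -log2(0.2) ≈ 2.32
```

The per-word surprisal values, one per spoken word, are what serve as regressors against the fMRI time series.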
Probabilistic language models are increasingly used to provide neural representations of linguistic ...
Several popular Transformer-based language models have been found to be succes...
In contextually rich language comprehension settings listeners can rel...
Deep language algorithms, like GPT-2, have demonstrated remarkable abilities t...
Considerable progress has recently been made in natural language processing: d...
How does the human brain construct narratives from a sequence of spoken words? Here we present a ben...
Neuroimaging studies of language have typically focused on either production or comprehension of sin...
Several popular sequence-based and pretrained language models have been found to be successful for t...
The notion of prediction is studied in cognitive neuroscience with increasing intensity. We investig...
Several popular Transformer-based language models have been found to be successful for text-driven b...
Functional activation for language processing in left hemisphere language regions has been shown to ...
Neural Language Models (NLMs) have made tremendous advances during the last ye...
The activations of language transformers like GPT-2 have been shown to linearl...
Speech comprehension is a complex process that draws on humans’ abilities to extract lexica...