The entropy-reduction hypothesis claims that the cognitive processing difficulty of a word in sentence context is determined by the word's effect on the uncertainty about the sentence. Here, this hypothesis is tested more thoroughly than has been done before, using a recurrent neural network to estimate entropy and self-paced reading to obtain measures of cognitive processing load. Results show a positive relation between reading time on a word and the reduction in entropy due to processing that word, supporting the entropy-reduction hypothesis. Although this effect is independent of the effect of word surprisal, we find no evidence that these two measures correspond to cognitively distinct processes.
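The two information-theoretic quantities compared in the abstract, word surprisal and entropy reduction, can be sketched as follows. This is a minimal illustration only, not the paper's RNN-based estimation; the function names and the toy probability distributions are hypothetical, and entropy reduction is clipped at zero following Hale's formulation:

```python
import math

def surprisal(p):
    """Surprisal of a word: negative log (base 2) of its in-context probability."""
    return -math.log2(p)

def entropy(dist):
    """Shannon entropy (in bits) of a distribution over sentence continuations."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

# Toy distributions over possible continuations before and after reading a word.
before = {"A": 0.25, "B": 0.25, "C": 0.25, "D": 0.25}  # 2 bits of uncertainty
after = {"A": 0.5, "B": 0.5}                           # 1 bit remains

# Entropy reduction due to the word, clipped at zero.
delta_h = max(0.0, entropy(before) - entropy(after))
```

Under the entropy-reduction hypothesis, `delta_h` (here 1 bit) would predict reading time on the word, alongside (and independently of) its surprisal.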
The human brain processes language to optimise efficient communication. Studies have shown extensive...
The 'unlexicalized surprisal' of a word in sentence context is defined as the negative logarithm of ...
Workshop on Cog...
We outline four ways in which uncertainty might affect comprehension difficulty in human sentence pr...
What are the effects of word‐by‐word predictability on sentence processing times during the natural ...
Human reading behavior is sensitive to surprisal: more predictable words tend to be read faster. Une...
The notion of prediction is studied in cognitive neuroscience with increasing intensity. We investig...
AbstractReading times on words in a sentence depend on the amount of information the words convey, w...
Information theoretic measures of incremental parser load were generated from a phrase structure par...
Measures of entropy are useful for explaining the behaviour of cognitive models. We demonstrate tha...
This paper reveals errors within Norwich et al.’s Entropy Theory of Perception, errors that have bro...
A word’s predictability or surprisal, as determined by cloze probabilities or language models (Frank...
Original paper can be found at: http://www.aisb.org.uk/publications/proceedings/aisb05/1_EELC_Final....
corpus. We derive greater surprisals (Hale, 2001) for the unergative than the unaccusative case, whi...