Watching a speaker's facial movements can dramatically enhance our ability to comprehend words, especially in noisy environments. From a general doctrine of combining information from different sensory modalities (the principle of inverse effectiveness), one would expect that the visual signals would be most effective at the highest levels of auditory noise. In contrast, we find, in accord with a recent paper, that visual information improves performance more at intermediate levels of auditory noise than at the highest levels, and we show that a novel visual stimulus containing only temporal information does the same. We present a Bayesian model of optimal cue integration that can explain these conflicts. In this model, words are regarde...
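The abstract refers to optimal (Bayesian) cue integration, in which each sensory estimate is weighted by its reliability (inverse variance). A minimal sketch of that standard framework, with purely illustrative numbers not taken from the paper:

```python
# Standard reliability-weighted (Bayesian) cue integration for two Gaussian
# cues of the same quantity. Illustrative only; the paper's word-level model
# is more elaborate than this two-cue fusion.

def fuse_cues(mu_a, var_a, mu_v, var_v):
    """Combine auditory and visual estimates optimally.

    Each cue is weighted by its reliability (1/variance); the fused
    estimate always has lower variance than either cue alone.
    """
    w_a = 1.0 / var_a
    w_v = 1.0 / var_v
    mu = (w_a * mu_a + w_v * mu_v) / (w_a + w_v)
    var = 1.0 / (w_a + w_v)
    return mu, var

# A noisy auditory cue (high variance) is pulled toward the more
# reliable visual cue.
mu, var = fuse_cues(mu_a=0.0, var_a=4.0, mu_v=1.0, var_v=1.0)
# mu = 0.8, var = 0.8 (below both 4.0 and 1.0)
```

Under this scheme the benefit of adding a visual cue depends on the relative reliabilities of the two cues, which is what allows the model to depart from a simple "more noise, more visual benefit" prediction.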
Information processing in the sensory modalities is not segregated but interacts strongly. The exact...
Although multistable perception has long been studied, in recent years, paradigms involving ambiguou...
Speech is perceived with both the ears and the eyes. Adding congruent visual speech improves the per...
We assessed how synchronous speech listening and lipreading affects speech recognition in acoustic n...
Viewing a speaker’s articulatory movements substantially improves a listener’s ability to understand...
Background: Different sources of sensory information can interact, often shaping what we think we ha...
Humans typically make near-optimal sensorimotor judgements but show systematic biases when making mo...
Humans are effective at dealing with noisy, probabilistic information in familiar settings. One hall...
Recognising speech in background noise is a strenuous daily activity, yet most humans can master it....
In order to survive and function in the world, we must understand the content of our environment. Th...
Published online: 07 February 2017. Supplementary information: https://images.nature.com/original/natu...
Can we model speech recognition in noise by exploring higher order statistics of the combined signal...