We assessed how synchronous speech listening and lipreading affect speech recognition in acoustic noise. In simple audiovisual perceptual tasks, inverse effectiveness is often observed: the weaker the unimodal stimuli, or the poorer their signal-to-noise ratio, the stronger the audiovisual benefit. So far, however, inverse effectiveness has not been demonstrated for complex audiovisual speech stimuli. Here we assess whether this multisensory integration effect can also be observed for the recognizability of spoken words. To that end, we presented audiovisual sentences to 18 native Dutch, normal-hearing participants, who had to identify the spoken words from a finite list. Speech-recognition performance was determined for au...
Seeing the facial gestures of a speaker enhances phonemic identification in no...
Speech perception requires grouping acoustic information into meaningful linguistic-phonetic units v...
In face-to-face conversations, listeners process and combine speech information obtained from hearin...
Watching a speaker's facial movements can dramatically enhance our ability to comprehend words, espe...
Viewing a speaker’s articulatory movements substantially improves a listener’s ability to understand...
Lip reading is the ability to partially understand speech by looking at the sp...
While everyone has experienced that seeing lip movements may improve speech perception, little is kn...
Speech perception is inherently multimodal. Visual speech (lip-reading) information is used...
Speech perception is a bimodal process that involves both auditory and visual inputs. The auditory s...
In most of our everyday conversations, we not only hear but also see each other talk. Our understand...
Seeing a speaker's face as he or she talks can greatly help in understanding what the sp...
Inverse effectiveness, one of the three principles of multisensory integration, was formulat...
Objectives: In noisy environments, listeners benefit from both hearing and seeing a talker, demonstr...
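Several of the abstracts above quantify an audiovisual benefit relative to unimodal performance, and inverse effectiveness predicts that this benefit grows as unimodal performance (or signal-to-noise ratio) drops. As a point of reference only, a minimal LaTeX sketch of two measures commonly used for this purpose in the multisensory-integration literature is given below; the symbols P_A, P_V, and P_AV (proportion-correct scores in the auditory-only, visual-only, and audiovisual conditions) are illustrative assumptions and not necessarily the exact metrics used in each study listed above.

% Multisensory enhancement relative to the best unimodal score
% (inverse effectiveness: ME tends to increase as max(P_A, P_V) decreases,
% e.g., at poorer signal-to-noise ratios).
\[
  \mathrm{ME} = \frac{P_{AV} - \max(P_A, P_V)}{\max(P_A, P_V)} \times 100\%
\]
% Visual benefit normalized by the available room for improvement over
% the auditory-only score (after Sumby & Pollack, 1954).
\[
  B_V = \frac{P_{AV} - P_A}{1 - P_A}
\]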