Speech perception often benefits from seeing the speaker's lip movements when they are available. One potential mechanism underlying this reported audiovisual gain in perception is on-line prediction. In this study we address whether preceding speech context in a single modality can improve audiovisual processing, and whether any such improvement is based on on-line information transfer across sensory modalities. In each trial of the experiments presented here, a speech fragment (the context), presented in a single sensory modality (voice or lips), was immediately continued by an audiovisual target fragment. Participants made speeded judgments about whether voice and lips agreed in the target fragment. ...
While everyone has experienced that seeing lip movements may improve speech perception, little is kn...
The McGurk effect is a textbook illustration of the automaticity with which the human brain integrat...
In pandemic times, when visual speech cues are masked, it becomes particularly...
The sight of a speaker’s facial movements during the perception of a spoken message can benefit spee...
ABSTRACT—Speech perception is inherently multimodal. Visual speech (lip-reading) information is used...
Speech research during recent years has moved progressively away from its traditional focus on audit...
Recent magneto-encephalographic and electro-encephalographic studies provide e...
Speech perception in face-to-face conversation involves processing of speech sounds (auditory) and s...
Two experiments aimed to determine whether features of both the visual and acoustical inputs are alw...
We investigated the existence of a cross-modal sensory gating reflected by the...
Do cross-modal interactions during speech perception only depend on well-known auditory and visuo-fa...
Recent neurophysiological studies demonstrate that audio-visual speech integra...
In face-to-face conversations, listeners process and combine speech information obtained from hearin...
Although the default state of the world is that we see and hear other people talking, there is evide...