Although the default state of the world is that we see and hear other people talking, there is evidence that seeing and hearing ourselves rather than someone else may lead to visual (i.e., lip-read) or auditory ‘self’ advantages. We assessed whether there is a ‘self’ advantage for phonetic recalibration (a lip-read driven cross-modal learning effect) and selective adaptation (a contrastive effect in the opposite direction of recalibration). We observed both aftereffects as well as an on-line effect of lip-read information on auditory perception (i.e., immediate capture), but there was no evidence for a ‘self’ advantage in any of the tasks (as additionally supported by Bayesian statistics). These findings strengthen the emerging notion that ...
The kinds of aftereffects, indicative of cross-modal recalibration, that are observed after exposure...
When speech perception is difficult, one way listeners adjust is by reconfiguring phoneme category b...
We assessed how synchronous speech listening and lipreading affects speech recognition in acoustic n...
Published: 22 March 2019.
Recently, we have shown that lipread speech can recalibrate auditory speech identification when ther...
When listeners experience difficulty in understanding a speaker, lexical and audiovisual (or lip-rea...
Research suggests that selective adaptation in speech is a low-level process dependent on sensory-sp...
Listeners quickly learn to label an ambiguous speech sound if there is lipread information that tell...
Exposure to incongruent auditory and visual speech produces both visual recalibration and selective ...
Speech perception often benefits from vision of the speaker's lip movements when they are available....
Repeated presentation of speech syllables can change identification of ambiguous syllables, a percep...
Speech perception often benefits from vision of the speaker's lip movements when they are available....
Lipreading can evoke an immediate bias on auditory phoneme perception [e.g. 6] and it can produce an...
Lip reading is the ability to partially understand speech by looking at the sp...