Exposure to ambiguous speech combined with clear lipread speech can recalibrate auditory speech identification, a phenomenon known as phonetic recalibration (Bertelson, Vroomen, & De Gelder, 2003). Here, we examined whether phonetic recalibration is spatially specific. Participants were presented with an ambiguous auditory sound halfway between /b/ and /d/ (A?) combined with lipread /b/ or /d/ at either the left or right ear/side, and were subsequently tested with auditory-only test sounds at either the same or the opposite ear/side. Phonetic recalibration was consistently stronger when test sounds were presented at the same ear/side than when they were presented at the opposite ear/side. Phonetic recalibration thus has a spatial gradient, showing that st...
One of the most daunting tasks of a listener is to map a continuous auditory stream onto known speec...
The kinds of aftereffects, indicative of cross-modal recalibration, that are observed after exposure...
Although the default state of the world is that we see and hear other people talking, there is evide...
Listeners quickly learn to label an ambiguous speech sound if there is lipread information that tell...
Listeners use lexical or visual context information to recalibrate auditory speech perception. After...
Listeners adjust their phonetic categories to cope with variations in the speech signal (phonetic re...
The human brain can quickly adapt to changes in the environment. One example is phonetic recalibrati...
Recently, we have shown that lipread speech can recalibrate auditory speech identification when ther...
Listeners hearing an ambiguous speech sound flexibly adjust their phonetic categories in accordance ...
Listeners can flexibly adjust boundaries between phonemes when exposed to biased information. Ambigu...
Speech recognition starts with representations of basic acoustic perceptual features and ends by cat...
Listeners retune the boundaries between phonetic categories to adjust to individual speakers' produc...
When speech perception is difficult, one way listeners adjust is by reconfiguring phoneme category b...