Introduction

Speech communication is multi-sensory in nature. Seeing a speaker's head and face movements can significantly influence listeners' speech processing, especially when the auditory signal is unclear. However, research on audiovisual integration in speech processing has investigated prosodic perception far less than segmental perception. Furthermore, while native Japanese speakers tend to rely less on visual cues in segmental perception than speakers of Western languages, the extent to which visual cues are used in Japanese focus perception by native and non-native listeners remains unknown. To fill these gaps, we test focus perception in Japanese among native Japanese speakers and Cantonese speakers who...
Cross-language McGurk Effects are used to investigate the locus of auditory-visual speech integratio...
The fact that “purely” prosodic marking of focus may be weaker in some languages than in others, and...
Audition and vision are combined for the perception of speech segments and rec...
This study explores the contexts in which native Japanese listeners have difficulty identifying pros...
This study explored the contexts in which native Japanese listeners have difficulty identifying pros...
We naively believe that L1 is easier to hear than L2. Generally, this belief is correct, but not alw...
This study investigated language factors in the use of visual information in auditory-visual speech ...
This journal issue contains abstracts of the 5th ASA/ASJ Joint Meeting. This study explores the context...
We investigated how focus was prosodically realized in Taiwanese, Taiwan Mandarin and Beijing Mandar...
To examine the ...
In English and Japanese, the information-structural notion of contrastive focu...
The McGurk effect paradigm was used to examine the developmental onset of inter-language differences...