Speech processing has been shown to involve cross-modal interaction among different sensory systems. The present study explored cross-modal perception in speech, particularly the role of facial musculature in vowel categorization, discrimination, and audiovisual processing. The study hypothesized that the configuration of a listener's facial musculature would bias speech perception toward the assigned lip condition. The study used a between-subjects design with three lip conditions (lip-rounding, lip-spreading, and neutral) and 60 participants. Participants completed auditory identification tasks, auditory discrimination tasks, and audiovisual tasks to investigate individual components of speech perception. The purpose of the study was masked using a cover story. This study found...
We perceive identity, expression and speech from faces. While perception of identity and exp...
Perception of speech sounds is affected by observing facial motion. Incongruence between speech soun...
The study examined whether people can extract speech-related information from the talker's upper fac...
Speech research during recent years has moved progressively away from its traditional focus on audit...
Do cross-modal interactions during speech perception only depend on well-known auditory and visuo-fa...
Orofacial somatosensory inputs modify the perception of speech sounds. Such au...
Speech perception is inherently multimodal. Visual speech (lip-reading) information is used...
Speech perception in face-to-face conversation involves processing of speech sounds (auditory) and s...
This behavioral study shows for the first time that the auditory perception of...
We investigated the effects of adaptation to mouth shapes associated with different spoken sounds (s...
In recent years, brain areas involved in the planning and execution of speec...
Two experiments aimed to determine whether features of both the visual and acoustical inputs are alw...
Myriad factors influence perceptual processing, but “embodied” approaches assert that sensorimotor i...
Orofacial somatosensory inputs modify the perception of speech sounds (Ito et ...
Speech perception often benefits from vision of the speaker's lip movements when they are available....