In speech, listeners extract continuously varying spectrotemporal cues from the acoustic signal to perceive discrete phonetic categories. Spectral cues are spatially encoded in the amplitude of responses in phonetically tuned neural populations in auditory cortex. It remains unknown whether similar neurophysiological mechanisms encode temporal cues like voice-onset time (VOT), which distinguishes sounds like /b/ and /p/. We used direct brain recordings in humans to investigate the neural encoding of temporal speech cues with a VOT continuum from /ba/ to /pa/. We found that distinct neural populations respond preferentially to VOTs from one phonetic category, and are also sensitive to sub-phonetic VOT differences within a population's preferred...
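The encoding scheme described in this abstract, with category-preferring populations whose response amplitude also tracks sub-phonetic VOT differences, can be made concrete with a toy simulation. The sketch below is purely illustrative and not taken from the study: the ~20 ms category boundary, the sigmoid tuning shape, and the 0-50 ms /ba/-/pa/ continuum are assumptions chosen only to show the idea.

```python
import math

def population_response(vot_ms, prefers_long_vot, boundary_ms=20.0, slope=0.15):
    """Toy response amplitude of a phonetically tuned population.

    The population responds most strongly to VOTs from its preferred
    category and, within that category, its amplitude still varies with
    the exact (sub-phonetic) VOT value. Boundary and slope are assumed.
    """
    # Signed distance from the assumed category boundary, oriented toward
    # the population's preferred side of the VOT continuum.
    d = (vot_ms - boundary_ms) if prefers_long_vot else (boundary_ms - vot_ms)
    # Sigmoid: near zero for the non-preferred category, graded within
    # the preferred category (sub-phonetic sensitivity).
    return 1.0 / (1.0 + math.exp(-slope * d))

# Step through a /ba/ (0 ms) to /pa/ (50 ms) VOT continuum.
for vot in range(0, 55, 5):
    rb = population_response(vot, prefers_long_vot=False)  # /b/-preferring
    rp = population_response(vot, prefers_long_vot=True)   # /p/-preferring
    print(f"VOT {vot:2d} ms   /b/-preferring: {rb:.2f}   /p/-preferring: {rp:.2f}")
```

The printed table shows each population responding strongly only to stimuli from its own category while its amplitude still changes gradually with VOT inside that category, which is the qualitative pattern the abstract reports.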
Neural oscillations in auditory cortex are argued to support parsing and representing speech constit...
Human speech perception results from neural computations that transform external acoustic speech sig...
What is the neural representation of a speech code as it evolves in time? How do listeners integrate...
Speech perception requires cortical mechanisms capable of analysing and encodi...
Sensory processing involves identification of stimulus features, but also integration with the surro...
This study explored the neural systems underlying the perception of phonetic category structur...
During speech perception, linguistic elements such as consonants and vowels are extracted from a com...
Human speech has a unique capacity to carry and communicate rich meanings. However, it is not known ...
Spoken word recognition models and phonological theory propose that abstract features play a central...
Speech perception requires the rapid and effortless extraction of meaningful phonetic information fr...
Neural processing of sounds in the dorsal and ventral streams of the (human) auditory cortex is opti...
Speech is a complex acoustic signal showing a quasiperiodic structure at several timescales. Integra...