<p>(A) Joint modulation selective filters. (A1) Joint frequency specific: the spectrogram is filtered with a bank of modulation selective filters at different spectral modulations (Ω), temporal modulations (ω), and direction (upwards/downwards). The output of the filter bank is averaged across time and direction to yield a reduced representation of modulation energy as a function of Ω, ω, and frequency. The joint frequency-specific MTF-based model predicts that fMRI responses vary linearly with this representation, i.e. sounds that differ with respect to any of the three dimensions will elicit different responses. (A2) Joint frequency non-specific: the 3D modulation representation is averaged across frequency to yield a global measure of mo...
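The reduction described in (A1)–(A2) can be sketched in code. The following is a minimal illustrative implementation, not the authors' pipeline: it filters a spectrogram with Gaussian bandpass filters in the 2D modulation domain at each (Ω, ω) pair and both sweep directions, averages the output magnitude across time and direction to get energy as a function of (Ω, ω, frequency), then averages across frequency for the frequency non-specific variant. All names and filter shapes are assumptions.

```python
import numpy as np

def modulation_energy(spec, spec_mods, temp_mods, bw=0.5):
    """Illustrative sketch of a joint modulation filter bank (assumed
    structure, not the paper's code). spec: (n_freq, n_time) spectrogram;
    spec_mods / temp_mods: positive spectral (Omega) and temporal (omega)
    modulation rates. Returns energy[Omega, omega, freq], averaged across
    time and the two sweep directions, plus its frequency-averaged version."""
    n_freq, n_time = spec.shape
    S = np.fft.fft2(spec)
    # 2D modulation-domain axes: cycles/channel (spectral), cycles/frame (temporal)
    Fs = np.fft.fftfreq(n_freq)[:, None]
    Ft = np.fft.fftfreq(n_time)[None, :]
    energy = np.zeros((len(spec_mods), len(temp_mods), n_freq))
    for i, Om in enumerate(spec_mods):
        for j, om in enumerate(temp_mods):
            for d in (+1, -1):  # upward / downward ripple direction
                # Gaussian bandpass centred at (Om, d*om) in the modulation plane
                H = np.exp(-(((Fs - Om) / (bw * Om)) ** 2
                             + ((Ft - d * om) / (bw * om)) ** 2))
                y = np.fft.ifft2(S * H)
                # filter-output magnitude, averaged across time;
                # the 0.5 factor averages the two directions
                energy[i, j] += 0.5 * np.abs(y).mean(axis=1)
    global_energy = energy.mean(axis=2)  # frequency non-specific version (A2)
    return energy, global_energy
```

Under the frequency-specific model the first return value is the predictor; collapsing it across the frequency axis yields the frequency non-specific predictor.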
Neural processing of sounds in the dorsal and ventral streams of the (human) auditory cortex is opti...
Copyright © 2013 Daniel E. Rio et al. This is an open access article distributed under the Creative ...
(A) Responses to natural and model-matched sounds from two example voxels from a single subject. One...
(A) The model consists of two cascaded stages of filtering. In the first stage, a cochleagram is com...
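A two-stage filtering cascade of this kind can be sketched as follows. This is a hedged, simplified illustration of the general architecture (Gaussian FFT-domain filters and rectification as a crude envelope), not the model's actual implementation; all names, filter shapes, and parameters are assumptions.

```python
import numpy as np

def two_stage_filter(x, sr, cochlear_cfs, mod_rates, q=4.0):
    """Illustrative two-stage cascade (assumed structure): stage 1 builds a
    cochleagram with FFT-domain bandpass filters plus envelope extraction;
    stage 2 filters each channel's envelope with temporal modulation filters."""
    n = len(x)
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(n, d=1.0 / sr)
    # Stage 1: cochleagram = per-channel envelopes (n_channels x n_samples)
    coch = np.empty((len(cochlear_cfs), n))
    for i, cf in enumerate(cochlear_cfs):
        H = np.exp(-0.5 * ((freqs - cf) / (cf / q)) ** 2)  # Gaussian bandpass
        band = np.fft.irfft(X * H, n)
        coch[i] = np.abs(band)  # crude envelope via rectification
    # Stage 2: modulation filtering of each channel envelope
    E = np.fft.rfft(coch, axis=1)
    out = np.empty((len(mod_rates), len(cochlear_cfs), n))
    for j, rate in enumerate(mod_rates):
        Hm = np.exp(-0.5 * ((freqs - rate) / (rate / q)) ** 2)
        out[j] = np.fft.irfft(E * Hm[None, :], n, axis=1)
    return coch, out
```

The key design point is the cascade: the second stage operates on the envelopes produced by the first, so its filters are tuned to modulation rates rather than audio frequencies.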
<p>(A) The input spectrogram (top left) is transformed by a linear modulation filter bank (right) fo...
<p>(A–B) Left: MTFs as estimated by the joint frequency-specific MTF-based model. The color code ind...
(A) Top: Waveform of the entire song stimulus. Participants listened to a 190.72-second rock song (A...
We systematically determined which spectrotemporal modulations in speech are necessary for comprehen...
A multi-channel model, describing the effects of spectral and temporal integration in amplitude-modu...
(A) The logic of the model-matching procedure, as applied to fMRI. The models we consider are define...
Frequency modulation (FM) is a basic constituent of vocalisation in many animals as well as in human...
In current models of modulation perception, the stimuli are first filtered and nonlinearly transform...
<p><b>A.</b> Data for each neuron were split into an estimation data set, used to fit model paramete...
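An estimation/validation split of this kind can be sketched generically. The snippet below is illustrative only (a plain linear fit scored by Pearson correlation on held-out data), not the study's pipeline; the function name and split fraction are assumptions.

```python
import numpy as np

def fit_and_validate(X, y, frac=0.8, seed=0):
    """Illustrative split: fit a linear model on the estimation set,
    report prediction accuracy (Pearson r) on the held-out validation set."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    n_est = int(frac * len(y))
    est, val = idx[:n_est], idx[n_est:]
    # fit weights on the estimation set only
    w, *_ = np.linalg.lstsq(X[est], y[est], rcond=None)
    pred = X[val] @ w
    r = np.corrcoef(pred, y[val])[0, 1]
    return w, r
```

Keeping the validation set untouched during fitting is what makes the reported prediction accuracy an unbiased estimate of model performance.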