In this work, we address the prediction of speech articulators' temporal geometric position from the sequence of phonemes to be articulated. We start from a set of real-time MRI sequences uttered by a female French speaker. The contours of five articulators were tracked automatically in each frame of the MRI video. Then, we explore the capacity of a bidirectional GRU to correctly predict each articulator's shape and position given the sequence of phonemes and their durations. We propose a 5-fold cross-validation experiment to evaluate the generalization capacity of the model. In a second experiment, we evaluate our model's data efficiency by reducing the training data. We evaluate the point-to-point Euclidean distance...
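To make the modelling setup described in the abstract above concrete, here is a minimal sketch of a bidirectional GRU that maps a frame-level phoneme sequence to per-frame articulator contour points, together with a point-to-point Euclidean error. This is an illustration only, not the authors' implementation: the frame-level encoding (each phoneme label repeated over the MRI frames it spans), the layer sizes, the number of contour points, and the exact error definition are all assumptions.

```python
import torch
import torch.nn as nn


class PhonemeToContourGRU(nn.Module):
    """Sketch: bidirectional GRU from phoneme labels to articulator contours."""

    def __init__(self, n_phonemes=40, emb_dim=64, hidden_dim=128, n_points=100):
        super().__init__()
        # Phoneme identity embedding; durations are handled by repeating each
        # phoneme label over the MRI frames it spans (one label per frame).
        self.embed = nn.Embedding(n_phonemes, emb_dim)
        self.gru = nn.GRU(emb_dim, hidden_dim, batch_first=True,
                          bidirectional=True)
        # Each frame is mapped to (x, y) coordinates of n_points contour
        # points covering the tracked articulators (sizes are assumptions).
        self.head = nn.Linear(2 * hidden_dim, 2 * n_points)

    def forward(self, phoneme_ids):
        # phoneme_ids: (batch, n_frames) integer labels, one per MRI frame
        h, _ = self.gru(self.embed(phoneme_ids))
        out = self.head(h)                           # (batch, n_frames, 2 * n_points)
        return out.view(*phoneme_ids.shape, -1, 2)   # (batch, n_frames, n_points, 2)


def point_to_point_error(pred, target):
    # Mean Euclidean distance between predicted and reference contour points,
    # in the spirit of the point-to-point metric the abstract mentions.
    return torch.linalg.norm(pred - target, dim=-1).mean()


# Example: 2 utterances of 50 MRI frames each
model = PhonemeToContourGRU()
frames = torch.randint(0, 40, (2, 50))
pred = model(frames)                                  # (2, 50, 100, 2)
print(point_to_point_error(pred, torch.zeros_like(pred)).item())
```

In this sketch, phoneme durations enter the model implicitly through the number of frames each label occupies, which is one simple way to condition the prediction on both phoneme identity and duration.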
Acoustic simulations used in the articulatory synthesis of speech take a series...
A new approach is proposed for quantifying the degree of articulator movement within a phoneme as a ...
This study investigates the relation between parameters describing difference...
Articulatory speech synthesis requires generating realistic vocal tract shapes...
In order to study inter-speaker variability, this work aims to assess the gene...
A 3D physiological articulatory model constructed based on volumetric MRI data from a male speaker w...
We introduce a method for predicting midsagittal contours of orofacial articulators from real-time M...
The traditional way of estimating the formant frequencies from articulatory data presupposes knowledge...
This paper uses mediosagittal slices of a static magnetic resonance imaging (MRI)...
Magnetic resonance imaging (MRI) technology has facilitated capturing the dynamics of speech production...
Speech production mechanisms can be characterized at a peripheral level by both their acoustic and a...
Articulatory synthesis allows the link between the temporal evolution of the V...