The goal of this work is to investigate audiovisual-to-articulatory inversion. It is well established that acoustic-to-articulatory inversion is an underdetermined problem. On the other hand, there is strong evidence that human speakers/listeners exploit the multimodality of speech, and more particularly articulatory cues: the view of the visible articulators, i.e. the jaw and lips, improves speech intelligibility. It is thus interesting to add constraints provided by direct visual observation of the speaker's face. Visible data were obtained by stereovision, enabling the 3D recovery of jaw and lip movements. These data were processed to fit the nature of the parameters of Maeda's articulatory model. Inversion experiments were conducted
This paper deals with acoustic to articulatory inversion of speech by using an...
There is no single technique that will allow all relevant behaviour of the speech articulators (lips...
In this paper, we present the 3D acquisition infrastructure we developed for building a talking face...
We present an inversion framework to identify speech production properties fro...
In the framework of experimental phonetics, our approach to the study of speec...
In this study, previous articulatory midsagittal models of tongue and lips are...
The objective of the presentation is to examine issues in constraining acousti...
Orofacial clones can display speech articulation in an augmented mode, i.e. di...
The authors present two visual articulation models for speech synthesis and methods to obtain them f...
This work deals with the construction of articulatory models which can be eas...
In audiovisual speech communication, the lower part of the face (mainly lips a...
The article presents a method for adapting a GMM-based acoustic-articulatory i...
Several studies in the past have shown that the features based on the kinematics of speech articulat...