The article describes a video-only speech recognition system for a “silent speech interface” application, using ultrasound and optical images of the voice organ. A one-hour audio-visual speech corpus was phonetically labeled using an automatic speech alignment procedure and robust visual feature extraction techniques. HMM-based stochastic models were estimated separately on the visual and on the acoustic data. The performance of the visual speech recognition system is compared to that of a traditional acoustic-based recognizer. Index Terms: speech recognition, audio-visual speech description, silent speech interface, machine learning
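The abstract above mentions HMM-based stochastic models estimated on visual features. As a minimal illustrative sketch (not the article's actual system), the snippet below scores a sequence of "visual" feature vectors against 3-state left-to-right Gaussian HMMs with the forward algorithm and picks the best-scoring phone model; all feature dimensions, means, variances, and transition probabilities are invented for demonstration.

```python
import numpy as np

def log_gauss(x, means, variances):
    """Log-density of vector x under one diagonal Gaussian per HMM state."""
    return -0.5 * np.sum(np.log(2 * np.pi * variances)
                         + (x - means) ** 2 / variances, axis=-1)

def forward_loglik(obs, means, variances, log_trans, log_init):
    """Total log-likelihood of an observation sequence (forward algorithm)."""
    alpha = log_init + log_gauss(obs[0], means, variances)
    for x in obs[1:]:
        alpha = log_gauss(x, means, variances) + np.array(
            [np.logaddexp.reduce(alpha + log_trans[:, s])
             for s in range(len(log_init))])
    return np.logaddexp.reduce(alpha)

def make_model(state_means):
    """3-state left-to-right HMM with fixed (made-up) variances/transitions."""
    means = np.asarray(state_means, dtype=float)
    variances = np.full_like(means, 0.1)
    log_trans = np.log(np.array([[0.5, 0.5, 0.0],
                                 [0.0, 0.5, 0.5],
                                 [0.0, 0.0, 1.0]]) + 1e-300)
    log_init = np.log(np.array([1.0, 0.0, 0.0]) + 1e-300)
    return means, variances, log_trans, log_init

def classify(obs, models):
    """Return the label of the phone model with the highest log-likelihood."""
    return max(models, key=lambda lbl: forward_loglik(obs, *models[lbl]))
```

In a real visual speech recognizer the Gaussian parameters would be trained (e.g. by Baum-Welch) on extracted tongue- and lip-image features rather than fixed by hand.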
This thesis describes how an automatic lip reader was realized. Visual speech recognition is a preco...
The article presents an HMM-based mapping approach for converting ultrasound a...
This paper discusses the use of surface electromyography for automatic speech recognition. Electromy...
The development of a continuous visual speech recognizer for a silent speech i...
This article presents a framework for a phonetic vocoder driven by ultrasound and optical images of ...
Recent improvements are presented for phonetic decoding of continuous-speech from ultrasound and opt...
This article addresses synchronous acquisition of high-speed multimodal speech data, composed of ult...
Silent Speech Interfaces use data from the speech production process, such as visual information o...
This paper presents recent developments on our “silent speech interface” that converts tongue and l...
Silent Speech Interfaces have been proposed for communication in silent condit...
The article presents an HMM-based mapping approach for converting ultrasound and video images of the...
This article investigates the use of statistical mapping techniques for the co...
Visual speech information from the speaker’s mouth region has been successfully shown to ...
With 7.5 million people unable to speak due to various physical and mental conditions, patients are ...
The paper describes advances in the development of an ultrasound silent speech interface f...