Our approach to the problem of evaluating segmentations and transcriptions of speech data is presented. We developed an automatic pattern-matching procedure that relates different manual or automatic segmentations to each other. The comparison of segmentations measures the degree of agreement between the chosen labels and between the segment boundaries. Since we exemplify our evaluation method on automatic transcriptions produced by the Munich AUtomatic Segmentation System (MAUS), currently being developed at the IPSK (Kipp et al. [4]), our data also give information on the quality of the system's segmentation and transcription performance.

1. INTRODUCTION

For phonetic and phonological investigations as well as for many ap...
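The abstract describes the comparison in terms of label agreement and boundary agreement between two segmentations. The following is only a minimal sketch of such a comparison, not the paper's actual pattern-matching procedure; the segment representation, the overlap-based pairing, the 20 ms boundary tolerance, and the function names are assumptions introduced here for illustration.

```python
# Minimal sketch (not the paper's procedure): comparing two phone
# segmentations represented as lists of (start_s, end_s, label) tuples.

def label_agreement(reference, hypothesis):
    """Fraction of reference segments whose label matches the label of the
    maximally overlapping hypothesis segment."""
    matches = 0
    for r_start, r_end, r_label in reference:
        # pick the hypothesis segment with the largest time overlap
        best = max(
            hypothesis,
            key=lambda h: min(r_end, h[1]) - max(r_start, h[0]),
        )
        if best[2] == r_label:
            matches += 1
    return matches / len(reference)

def boundary_agreement(reference, hypothesis, tolerance=0.020):
    """Fraction of reference boundaries matched by a hypothesis boundary
    within +/- tolerance seconds (assumed 20 ms here)."""
    ref_bounds = sorted({s for s, _, _ in reference} | {e for _, e, _ in reference})
    hyp_bounds = sorted({s for s, _, _ in hypothesis} | {e for _, e, _ in hypothesis})
    hit = sum(1 for b in ref_bounds
              if any(abs(b - h) <= tolerance for h in hyp_bounds))
    return hit / len(ref_bounds)

# Example: manual vs. automatic segmentation of a short stretch of speech
manual = [(0.00, 0.08, "d"), (0.08, 0.21, "a:"), (0.21, 0.30, "s")]
auto   = [(0.00, 0.09, "d"), (0.09, 0.20, "a:"), (0.20, 0.30, "z")]
print(label_agreement(manual, auto))     # 2 of 3 labels agree
print(boundary_agreement(manual, auto))  # all boundaries within 20 ms
```

In practice an alignment of the two label sequences (e.g. by dynamic programming) would normally precede such scoring; the sketch pairs segments by time overlap only to keep the example short.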
A method for automatic segmentation of speech into phones is described. The incoming utterance is sp...
A speech synthesis system which operates by concatenation of acoustic units needs a database of ...
Each time a word is uttered, even pronounced by one and the same speaker, its pronunciation can diff...
We describe the pronunciation model of the automatic segmentation technique MAUS based on a data-dr...
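The snippet above is cut off, but it refers to a data-driven pronunciation model. Purely as an illustration of the general idea (not the MAUS model itself), the sketch below expands a canonical phone sequence into weighted pronunciation variants via probabilistic rewrite rules; the rule set, phone symbols, and probabilities are invented for the example.

```python
# Illustrative sketch only: enumerate weighted pronunciation variants from a
# canonical phone string using invented probabilistic rewrite rules.

from itertools import product

# each rule: canonical phone -> [(replacement or None for deletion, prob), ...]
RULES = {
    "@": [("@", 0.7), (None, 0.3)],   # schwa may be deleted
    "t": [("t", 0.9), ("d", 0.1)],    # occasional voicing
}

def variants(canonical):
    """Yield (variant_phones, probability) for a canonical phone sequence."""
    options = [RULES.get(p, [(p, 1.0)]) for p in canonical]
    for choice in product(*options):
        phones = [c for c, _ in choice if c is not None]
        prob = 1.0
        for _, p in choice:
            prob *= p
        yield phones, prob

for v, p in variants(["h", "a:", "b", "@", "n"]):   # canonical form of "haben"
    print(v, round(p, 2))
```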
In fundamental linguistic as well as in speech technology research there is an increasing need for ...
We address the problem of estimating the quality of Automatic Speech Recognition (ASR) output at utt...
Introduction: Segmentation is the division of a speech file into non-overlapping sections correspond...
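To make the definition above concrete, the following small sketch represents a segmentation as a sequence of non-overlapping labelled intervals and checks that property; the class and function names are illustrative assumptions, not taken from the cited text.

```python
# Sketch of a segmentation as non-overlapping, ordered labelled intervals.

from dataclasses import dataclass
from typing import List

@dataclass
class Segment:
    start: float   # seconds
    end: float     # seconds
    label: str     # e.g. a phone symbol

def is_valid_segmentation(segments: List[Segment]) -> bool:
    """Segments must be ordered, non-overlapping, and non-empty."""
    return all(a.end <= b.start for a, b in zip(segments, segments[1:])) \
        and all(s.start < s.end for s in segments)

print(is_valid_segmentation([Segment(0.0, 0.08, "d"), Segment(0.08, 0.21, "a:")]))  # True
```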