This paper proposes three corpora of emotional speech in Japanese that maximize the expression of each emotion (joy, anger, and sadness) for use with CHATR, the concatenative speech synthesis system being developed at ATR. A perceptual experiment was conducted using the synthesized speech generated from each emotion corpus, and the emotions proved to be significantly identifiable. The authors' current work is to identify the local acoustic features relevant for specifying a particular emotion type. F0 and duration showed significant differences among emotion types. AV (amplitude of voicing source) and GN (glottal noise) also showed differences. This paper reports on the corpus design, the perceptual experiment, and the results of the ...
This paper proposes a system to convert neutral speech to emotional speech with controlled intensity of emo...
Studies in the perceptual identification of emotional states suggested that listeners seemed to depe...
In the present study, the relation between the semantic content and different acoustic parameters of...
There has been considerable research into perceptible correlates of emotional state, but a very limi...
In this study, we investigate acoustic properties of speech associated with four different emotions...
Emotions play an important role in human life. They are essential for communication, for...
In recent years, interest has grown in, on the one hand, automatically detecting and interpreting emoti...
The MediaTeam Emotional Speech Corpus is currently the largest database of emotional speech for coll...
Work on facial expressions of emotions (Calder, Burton, Miller, Young, & Akamatsu, 2001) and emotion...
This study introduces a corpus of 260 naturalistic human nonlinguistic vocalizations representing ni...
This paper is related to the method of adding an emotional speech corpus to a high-quality l...
The ability of naive listener-judges to recognize the affective state of a speaker on the basis of n...
Modern speech synthesis systems with very high intelligibility are readily available in a number of ...
In emotional speech studies, it is well known that loudness, pitch, position and length of pauses, e...
With increased interest in human-computer/human-human interactions, systems deducing and identifying...