Abstract—We present a method to classify fixed-duration windows of speech as expressing anger or not, which does not require speech recognition, utterance segmentation, or separating the utterances of different speakers and can, thus, be easily applied to real-world recordings. We also introduce the task of ranking a set of spoken dialogues by decreasing percentage of anger duration, as a step towards helping call center supervisors and analysts identify conversations requiring further action. Our work is among the very few attempts to detect emotions in spontaneous human-human dialogues recorded in call centers, as opposed to acted studio recordings or human-machine dialogues. We show that despite the non-perfect performance (approx. 70% a...
Humans can communicate their emotions by modulating facial expressions or the tone of their voice. A...
Most research that explores the emotional state of users of spoken dialog systems does not fully uti...
Machine learning researchers have dealt with the identification of emotional cues from speech sinc...
We investigate an affective saliency approach for speech emotion recognition of spoken dialogue utte...
Anger recognition in speech dialogue systems can help to enhance human-computer interaction. In thi...
This paper reports on the comparison between various acoustic feature sets and classification algori...
Abstract: We report on the progress with respect to an emotion-aware voice portal concerning several...
Abstract—We propose a novel real-time affect classification system based on features extracted from ...
In this paper, an emotion classification system based on speech signals is presented. The classifier...
The present study elaborates on the exploitation of both linguistic and acoustic feature modeling fo...
This paper describes a system that deploys acoustic and linguistic information from speech in order ...
Anger detection is a topic that is gaining more and more attention with voice portal carriers, as i...
This paper investigates the effect of fixed point calculations on the accuracy of automatic emotion ...
The goals of this research were: (1) to develop a system that will automatically measure changes in ...