We developed acoustic and lexical classifiers, based on a boosting algorithm, to assess the separability of the arousal and valence dimensions in spontaneous emotional speech. The spontaneous emotional speech data were acquired by inviting subjects to play a first-person-shooter video game. Our acoustic classifiers performed significantly better than the lexical classifiers on the arousal dimension, whereas on the valence dimension the lexical classifiers usually outperformed the acoustic ones. Finally, feature-level fusion of acoustic and lexical features did not always significantly improve classification performance. © 2008 Springer-Verlag Berlin Heidelberg
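As a minimal sketch of the feature-level fusion strategy described above — not the paper's actual pipeline — the following example concatenates a hypothetical acoustic feature matrix with a hypothetical lexical feature matrix per utterance and trains a single boosted classifier (AdaBoost via scikit-learn; the paper's specific boosting algorithm and features are assumptions here) on the fused representation:

```python
# Illustrative sketch only: feature-level fusion of acoustic and lexical
# features with a boosting classifier. All data below is synthetic.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 200
# Hypothetical per-utterance features: acoustic (e.g., pitch/energy
# statistics) and lexical (e.g., word-based counts).
acoustic = rng.normal(size=(n, 10))
lexical = rng.normal(size=(n, 30))
labels = rng.integers(0, 2, size=n)  # binary class, e.g., high/low arousal

# Feature-level fusion: concatenate both feature sets for each utterance
# before training one classifier, rather than fusing classifier outputs.
fused = np.hstack([acoustic, lexical])

X_train, X_test, y_train, y_test = train_test_split(
    fused, labels, test_size=0.25, random_state=0)

clf = AdaBoostClassifier(n_estimators=50, random_state=0)
clf.fit(X_train, y_train)
print("fused-feature accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

The alternative the abstract contrasts this with — training separate acoustic and lexical classifiers — would instead fit one model per feature matrix and compare (or combine) their predictions at the decision level.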