In this paper, we classify speech into several emotional states based on the statistical properties of prosody features estimated on utterances extracted from the Danish Emotional Speech (DES) and a subset of the Speech Under Simulated and Actual Stress (SUSAS) data collections. The proposed novelties are: 1) speeding up sequential floating feature selection by up to 60%; 2) applying fusion of decisions taken on short speech segments in order to derive a single decision for longer utterances; and 3) demonstrating that gender and accent information reduce the classification error. Indeed, the classification error is lowered by 1% to 11% when the combination of decisions is made on long phrases, and an error reduction of 2%-11% is obtained, ...
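For illustration, a minimal sketch of the sequential floating forward selection loop referenced in the abstract above is given below. The LDA scorer, 5-fold cross-validation, and synthetic data are assumptions made for the example; the paper's classifier, prosody features, and reported speed-up are not reproduced here.

    # Minimal sketch of sequential floating forward selection (SFFS).
    # The LDA scorer, 5-fold cross-validation, and synthetic data are assumptions
    # for this example, not the configuration used in the paper.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    def score(X, y, subset):
        """Cross-validated accuracy using only the features in `subset`."""
        clf = LinearDiscriminantAnalysis()
        return cross_val_score(clf, X[:, sorted(subset)], y, cv=5).mean()

    def sffs(X, y, k):
        """Select up to k features by sequential floating forward selection."""
        selected, best = set(), {}
        while len(selected) < k:
            # Forward step: add the feature that improves the score the most.
            remaining = [f for f in range(X.shape[1]) if f not in selected]
            f_add = max(remaining, key=lambda f: score(X, y, selected | {f}))
            selected.add(f_add)
            best[len(selected)] = score(X, y, selected)
            # Floating step: conditionally drop features while that beats the
            # best score previously recorded for the smaller subset size.
            while len(selected) > 2:
                f_drop = max(selected, key=lambda f: score(X, y, selected - {f}))
                new_score = score(X, y, selected - {f_drop})
                if new_score > best.get(len(selected) - 1, -np.inf):
                    selected.remove(f_drop)
                    best[len(selected)] = new_score
                else:
                    break
        return sorted(selected)

    X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                               random_state=0)
    print("selected features:", sffs(X, y, k=5))

The floating (backward) step only removes a feature when doing so strictly improves on the best score already recorded for the smaller subset size, which is what prevents the add/remove cycle from looping indefinitely.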
We developed acoustic and lexical classifiers, based on a boosting algorithm, to assess the separabi...
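As an illustration of the kind of classifier described in the abstract above, a minimal boosting sketch follows. AdaBoost with its default decision-stump base learner and synthetic feature vectors are assumptions; the abstract does not specify the boosting variant or the acoustic and lexical features used.

    # Hedged sketch of a boosted classifier over placeholder "acoustic" features.
    # AdaBoost and synthetic data are assumptions standing in for the authors'
    # unspecified boosting variant and feature set.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.model_selection import train_test_split

    # Rows stand in for utterances, columns for acoustic feature statistics.
    X, y = make_classification(n_samples=500, n_features=30, n_informative=8,
                               random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    clf = AdaBoostClassifier(n_estimators=100, random_state=0)
    clf.fit(X_train, y_train)
    print("held-out accuracy:", clf.score(X_test, y_test))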
An essential step to achieving human-machine speech communication with the naturalness of communicat...
A study of the automatic discrimination of emotion in three different time windows of speech is pre...
Emotional speech recognition aims to automatically classify speech units (e.g., utterances) into emo...
Emotional speech classification can be treated as a supervised learning task where the statistical pro...
The automatic analysis of speech to detect affective states may improve the way users interact with ...
The classification of emotional speech is a topic in speech recognition with m...
The intensive research of speech emotion recognition introduced a huge collection of speech emotion ...
Determination of an emotional state through speech increases the amount of information associated w...
The purpose of this paper is to make an automatic classification of speech int...
Intensive research of speech emotion recognition introduced a huge collection of speech emotion feat...
As automatic emotion recognition based on speech matures, new challenges can be faced. We therefore ...
Affective computing is becoming increasingly significant in the interaction between humans and machi...
The recognition of the internal emotional state of one person plays an important role in several hum...