It has been difficult to acquire large quantities of reliable speech emotion data, and acted emotions tend to be exaggerated compared with the less expressive emotions displayed in everyday life. More recently, larger datasets with natural emotions have been created. Rather than ignoring the smaller acted datasets, this study investigates whether information learnt from acted emotions is useful for detecting natural emotions. Cross-corpus research has mostly considered cross-lingual and even cross-age datasets, where differing methods of annotating emotions cause a drop in performance. For consistency, four adult English datasets covering acted, elicited and natural emotions are considered. A state-of-the-art model is prop...
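The cross-corpus protocol described in the abstract above can be illustrated with a short, hedged sketch: a classifier is fitted on utterance-level acoustic features from acted and elicited corpora and then scored on a held-out natural corpus. Everything below is a placeholder under stated assumptions (random stand-in features of eGeMAPS-like dimensionality, an SVM baseline, unweighted average recall as the metric); it is not the datasets or the state-of-the-art model of the study itself.

```python
# Minimal sketch of cross-corpus speech emotion recognition (SER) evaluation,
# assuming utterance-level feature vectors (e.g. openSMILE functionals) have
# already been extracted per corpus. Corpus contents here are random stand-ins.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)
N_FEATURES = 88                      # assumed eGeMAPS-sized functional vector
EMOTIONS = ["anger", "happiness", "sadness", "neutral"]

def fake_corpus(n_utterances):
    """Stand-in for a real corpus: random features plus random emotion labels."""
    X = rng.normal(size=(n_utterances, N_FEATURES))
    y = rng.choice(EMOTIONS, size=n_utterances)
    return X, y

# Train on acted/elicited material, test on a natural corpus (cross-corpus setting).
X_acted, y_acted = fake_corpus(500)
X_elicited, y_elicited = fake_corpus(400)
X_natural, y_natural = fake_corpus(300)

X_train = np.vstack([X_acted, X_elicited])
y_train = np.concatenate([y_acted, y_elicited])

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
model.fit(X_train, y_train)

# Unweighted average recall (UAR) is commonly reported in cross-corpus SER,
# since emotion classes are rarely balanced across corpora.
y_pred = model.predict(X_natural)
print("UAR on natural corpus:", recall_score(y_natural, y_pred, average="macro"))
```

With random stand-in data the printed UAR is near chance level; the point of the sketch is only the train-on-one-corpus, test-on-another split, which is what distinguishes cross-corpus from within-corpus evaluation.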
19th Annual Conference of the International Speech Communication Association (INTERSPEECH 2018) -- A...
Recognizing emotions in spoken communication is crucial for advanced human-machine interaction. Curr...
More than a decade has passed since research on automatic recognition of emotion from speech has bec...
The majority of existing speech emotion recognition research focuses on automatic emotion detection ...
We explore possibilities for enhancing the generality, portability and robustness of emotion recogni...
Obtaining large, human labelled speech datasets to train models for emotion recognition is a notorio...
Machine Learning (ML) algorithms within a human–computer framework are the leading force in speech e...
To date, several methods have been explored for the challenging task of cross-language speech emotio...
We use four speech databases with realistic, non-prompted emotions, and a large state-of-the-art aco...
We study the cross-database speech emotion recognition based on online learning. How to apply a clas...
This study introduces a corpus of 260 naturalistic human nonlinguistic vocalizations representing ni...
Language-based emotion analysis finds itself in a paradoxical situation. In the past decades, a plet...
In this paper, we focus on a challenging, but interesting, task in speech emotion recognition (SER),...
Despite the recent advancement in speech emotion recognition (SER) within a single corpus setting, t...