Advances in automated speech recognition have significantly accelerated the automation of contact centers, creating a need for robust Speech Emotion Recognition (SER) as an integral part of measuring customer Net Promoter Score. However, training a model for a specific language requires an emotion-labeled dataset in that language, which is a significant limitation: emotion detection datasets cover only English, German, Mandarin, and Indian languages. Our results show a difference between predicting two versus four emotions, which leads us to narrow datasets down to particular practical use cases rather than train the model on a whole given dataset. We identified that if an emotion transfers well enough from the source language to the target language, it reflects the...
In this thesis, we design a common questionnaire of stress-inducing and non-stress-inducing question...
This study reports experimental results on whether the acoustic realization of vocal emotions differ...
While approaches to automatic recognition of human emotion from speech have already achieved reasona...
In this study, we address emotion recognition using unsupervised feature learning from speech data, ...
The majority of existing speech emotion recognition research focuses on automatic emotion detection ...
Affective speech-to-speech translation (S2ST) aims to preserve the affective state conveyed in the spe...
Commonalities and differences in how humans perceive emotions in speech among different ...
This paper reports on mono- and cross-lingual performance of different acoustic and/or prosodic feat...
Most of the previous studies on Speech-to-Speech Translation (S2ST) focused on processing of linguis...
To date, several methods have been explored for the challenging task of cross-language speech emotio...
Emotion recognition plays an important role in human-computer interaction. Previously and currently,...
Humans sense, perceive, and convey emotion differently from each other due to physical, psychologica...
Machine Learning (ML) algorithms within a human–computer framework are the leading force in speech e...
In a conventional speech emotion recognition (SER) task, a classifier for a given language is traine...
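The conventional setup described above, a classifier trained and tested within one language, typically degrades when evaluated cross-corpus. The gap can be illustrated with a minimal synthetic sketch (not taken from any of the cited works): a nearest-centroid classifier is trained on mock "source-language" acoustic features and evaluated both in-corpus and on a distribution-shifted "target-language" corpus. The feature model, class names, and shift are all illustrative assumptions.

```python
import random

random.seed(0)

def make_corpus(n_per_class, shift=0.0):
    """Synthetic 2-D 'acoustic features' for two emotion classes.

    `shift` crudely models the feature-distribution mismatch between
    a source-language corpus and a target-language corpus.
    """
    data = []
    for label, centre in (("neutral", (0.0, 0.0)), ("angry", (2.0, 2.0))):
        for _ in range(n_per_class):
            x = random.gauss(centre[0] + shift, 0.5)
            y = random.gauss(centre[1] + shift, 0.5)
            data.append(((x, y), label))
    return data

def train_centroids(corpus):
    # Mean feature vector per emotion class.
    sums, counts = {}, {}
    for (x, y), label in corpus:
        sx, sy = sums.get(label, (0.0, 0.0))
        sums[label] = (sx + x, sy + y)
        counts[label] = counts.get(label, 0) + 1
    return {l: (sx / counts[l], sy / counts[l]) for l, (sx, sy) in sums.items()}

def accuracy(centroids, corpus):
    correct = 0
    for (x, y), label in corpus:
        # Predict the class whose centroid is nearest (squared distance).
        pred = min(centroids,
                   key=lambda l: (x - centroids[l][0]) ** 2 + (y - centroids[l][1]) ** 2)
        correct += pred == label
    return correct / len(corpus)

source = make_corpus(200)             # "source language" recordings
target = make_corpus(200, shift=1.0)  # "target language", mismatched features

model = train_centroids(source)
print(f"in-corpus accuracy:    {accuracy(model, source):.2f}")
print(f"cross-corpus accuracy: {accuracy(model, target):.2f}")
```

Even this toy model shows the pattern the abstracts above discuss: near-perfect in-corpus accuracy, with a marked drop once the train and test corpora differ, which is what motivates the cross-language transfer methods surveyed here.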