In this research, a study of cross-linguistic speech emotion recognition is performed. For this purpose, emotional data in several languages (English, Lithuanian, German, Spanish, Serbian, and Polish) are collected, resulting in a cross-linguistic speech emotion dataset of more than 10,000 emotional utterances. Although the gathered databases are bi-modal, our focus is on the acoustic representation only. The assumption is that the speech audio signal carries sufficient emotional information to detect and retrieve it. Several two-dimensional acoustic feature spaces, such as cochleagrams, spectrograms, mel-cepstrograms, and fractal dimension-based space, are employed as the representations of speech emotional feat...
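As an illustration of the two-dimensional feature spaces mentioned above, the following is a minimal sketch that turns a speech waveform into a log-power spectrogram with SciPy. The signal here is synthetic, and the sample rate and window parameters are assumptions for illustration, not the settings used in the study:

```python
import numpy as np
from scipy.signal import spectrogram

# Synthetic 1-second "utterance": a rising tone sampled at 16 kHz
# (stands in for a real emotional speech recording).
sr = 16000
t = np.linspace(0, 1.0, sr, endpoint=False)
y = np.sin(2 * np.pi * (200 + 300 * t) * t)

# Two-dimensional time-frequency representation: a log-power spectrogram
# with a 512-sample window and 50% overlap (illustrative values).
freqs, times, S = spectrogram(y, fs=sr, nperseg=512, noverlap=256)
log_S = 10 * np.log10(S + 1e-10)

print(log_S.shape)  # (frequency bins, time frames)
```

A cochleagram or mel-cepstrogram is produced analogously, differing only in the filter bank applied to the frequency axis; the resulting 2-D array is what a classifier consumes.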
Speech is an efficient medium for expressing attitudes and emotions through language. The crucial task for th...
Recognizing emotion from speech has become one of the active research themes in speech process...
In this paper, we present novel methods for estimating spontaneously expressed emotions using audio...
Emotion recognition plays an important role in human-computer interaction. Previously and currently,...
Machine Learning (ML) algorithms within a human–computer framework are the leading force in speech e...
Affective computing is becoming increasingly significant in the interaction between humans and machi...
Speech emotion recognition is a developing field in machine learning. The main purpose of this field...
Affective computing studies and develops systems capable of detecting human affects. The search for...
During the last 10–20 years, many new ideas have been proposed to improve the accuracy of...
In this thesis, we describe extensive experiments on the classification of emotions from speech usin...
Emotions play a crucial role in human mental life. They are vital for identifying a...
This work deals with the properties of the speech signal. At the beginning, it introduces a process o...
This paper reports on mono- and cross-lingual performance of different acoustic and/or prosodic feat...
In early research, basic acoustic features were the primary choice for emotion recognition from ...
This research proposes a speech emotion recognition model to predict human emotions using the convol...