Adopting continuous dimensional annotations for affective analysis has been gaining increasing attention from researchers in recent years. Due to the idiosyncratic nature of this problem, many subproblems have been identified, spanning from the fusion of multiple continuous annotations to exploiting output correlations amongst emotion dimensions. In this paper, we first empirically answer several important questions which have so far received only partial answers, or none at all, in the related literature. In more detail, we study the correlation of each emotion dimension (i) with other emotion dimensions and (ii) with basic emotions (e.g., happiness, anger). As a measure for comparison, we use video and audio features. Interestingly enough, we find t...
The differences between self-reported and observed emotion have only marginally been investigated in...
The problem of automatically estimating the interest level of a subject has been gaining attention b...
This paper presents our work on ACM MM Audio Visual Emotion Corpus 2014 (AVEC 2014) using the baseli...
Representation of facial expressions using continuous dimensions has been shown to be inherently more exp...
This paper focuses on designing frameworks for automatic affect prediction and classification in dim...
A frequently used procedure to examine the relationship between categorical and dimensional descript...
Past research in analysis of human affect has focused on recognition of prototypic expressi...
This paper investigates dimensional emotion prediction and classification from naturalistic facial e...
Many problems in machine learning and computer vision consist of predicting multi-dimensional output...
The automated analysis of affect has been gaining rapidly increasing attention from researchers over t...
Emotion recognition is an increasingly popular research topic in various fields, including human-com...
The size of easily-accessible libraries of digital music recordings is growing every day, and people...
Multimodal language analysis often considers relationships between features based on text and those ...