In this paper, we tackle the problem of emotion tagging of multimedia data by modeling the dependencies among multiple emotions in both the feature and label spaces. These dependencies, which carry crucial top-down and bottom-up evidence for improving multimedia affective content analysis, have not been thoroughly exploited yet. To this end, we propose two hierarchical models that independently and dependently learn the shared features and global semantic relationships among emotion labels to jointly tag multiple emotion labels of multimedia data. Efficient learning and inference algorithms of the proposed models are also developed. Experiments on three benchmark emotion databases demonstrate the superior performance of our methods to exis...
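As a rough illustration of the joint tagging idea only (not the two hierarchical models proposed here), the sketch below pairs a shared feature encoder with per-label emotion classifiers and a learned, symmetric label-correlation term, so that each emotion's score can borrow evidence from related emotions; all layer sizes, the loss choice, and the toy data are assumptions made for the example.

    # Minimal illustrative sketch (not the paper's hierarchical models): a shared
    # feature encoder feeds per-label emotion heads, and a learned label
    # co-occurrence matrix nudges correlated emotions toward consistent scores.
    import torch
    import torch.nn as nn

    class JointEmotionTagger(nn.Module):
        def __init__(self, feat_dim: int, num_emotions: int, hidden: int = 128):
            super().__init__()
            # Shared feature space: one encoder used by every emotion label.
            self.encoder = nn.Sequential(
                nn.Linear(feat_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
            )
            # One binary head per emotion (multi-label, not multi-class).
            self.heads = nn.Linear(hidden, num_emotions)
            # Label-dependency matrix, learned jointly with the encoder.
            self.label_corr = nn.Parameter(torch.zeros(num_emotions, num_emotions))

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            z = self.encoder(x)
            logits = self.heads(z)
            # Let each label's score borrow evidence from correlated labels.
            corr = 0.5 * (self.label_corr + self.label_corr.T)  # keep it symmetric
            return logits + torch.sigmoid(logits) @ corr

    def multilabel_loss(logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
        # Independent binary cross-entropy per emotion; dependencies enter via the model.
        return nn.functional.binary_cross_entropy_with_logits(logits, targets)

    if __name__ == "__main__":
        torch.manual_seed(0)
        model = JointEmotionTagger(feat_dim=64, num_emotions=6)
        x = torch.randn(8, 64)                # toy multimedia features
        y = (torch.rand(8, 6) > 0.7).float()  # toy multi-hot emotion tags
        loss = multilabel_loss(model(x), y)
        loss.backward()
        print("loss:", float(loss))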
Emotion lexica are commonly used resources to combat data poverty in automatic emotion detection. Ho...
In many online news services, users often write comments towards news in subjective emotions such as...
We propose a contextual user-emotion-based analysis (CUEBA) approach to analysing multimed...
As an important research issue in the affective computing community, multi-modal emotion recognition has...
Textual emotion detection is an attractive task, while previous studies have mainly focused on polarity or...
To capture variation in categorical emotion recognition by human perceivers, we propose a multi-labe...
Identifying multiple emotions in a sentence is an important research topic. Existing methods usually...
Integrating elements of various media, multimedia is capable of expressing complex informatio...
Multi-modal Multi-label Emotion Recognition (MMER) aims to identify various human emotions from hete...
Emotion annotations are important metadata for narrative texts in digital libraries. Such annotation...
Current multi-class emotion classification studies mainly focus on enhancing word-level and sent...
Multimedia data are usually represented by multiple features. In this paper, we propose a new algori...
Affective image understanding has been extensively studied in the last decade since more and more u...
Emotions that are elicited in response to a video scene contain valuable information for multimedia ...