We present a novel way of extracting features from short texts, based on the activation values of an inner layer of a deep convolutional neural network. We use the extracted features in multimodal sentiment analysis of short video clips representing one sentence each. We use the combined feature vectors of textual, visual, and audio modalities to train a classifier based on multiple kernel learning, which is known to perform well on heterogeneous data. We obtain a 14% performance improvement over the state of the art and present a parallelizable decision-level data fusion method, which is much faster, though slightly less accurate.
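The abstract above contrasts two fusion strategies: feature-level fusion (concatenating the per-modality feature vectors before training one classifier) and decision-level fusion (training one classifier per modality and combining their scores, which parallelizes trivially). A minimal sketch of both, with hypothetical feature dimensions and a toy centroid-distance scorer standing in for the multiple-kernel-learning classifier described in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy per-modality feature vectors for 6 clips (hypothetical dimensions).
text = rng.normal(size=(6, 4))
audio = rng.normal(size=(6, 3))
video = rng.normal(size=(6, 5))
labels = np.array([0, 0, 0, 1, 1, 1])

# Feature-level fusion: concatenate modality vectors into one vector per clip,
# then train a single classifier on the combined representation.
fused = np.concatenate([text, audio, video], axis=1)  # shape (6, 12)

def centroid_score(X, y):
    """Toy scorer: positive when a sample is closer to the class-1 centroid."""
    c0, c1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    return np.linalg.norm(X - c0, axis=1) - np.linalg.norm(X - c1, axis=1)

# Decision-level fusion: score each modality independently (these three calls
# could run in parallel), then average the scores before thresholding.
score = sum(centroid_score(X, labels) for X in (text, audio, video)) / 3
pred = (score > 0).astype(int)
```

The speed/accuracy trade-off mentioned in the abstract follows from this structure: decision-level fusion never sees cross-modal feature interactions, but each modality's model is smaller and independent.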
In this paper we present the techniques used for the University of Montréal’s team submissions to ...
We propose a method to automatically detect emotions in unconstrained settings as part of the 2013 E...
Thesis (M.Sc. in Computer Science), Instituto Politécnico Nacional, CIC, 2017, 1 file...
The advent of the Social Web has enabled anyone with an Internet connection to easily create and sha...
Technology has enabled anyone with an Internet connection to easily create and share their ideas, op...
Emotion recognition has become one of the most researched subjects in the scientific community, espe...
We propose a model for carrying out deep learning based multimodal sentiment analysis. The MOUD data...
Multimodal neural networks for sentiment analysis use video, text, and audio. Pr...
Automatic affect recognition is a challenging task due to the various modalities emotions ...
In this contribution, we investigate the effectiveness of deep fusion of text and audio features for...
We present our system description of input-level multimodal fusion of audio, video, and text for recog...
Emotion recognition from speech may play a crucial role in many applications related to human–comput...
The impressive results of Deep Convolutional Neural Networks in computer vision and image analysis have ...
Multimodal sentiment analysis is an important research topic in the field of NLP, aiming to analyze ...
Multimodal language analysis often considers relationships between features based on text and those ...