Datasets used in the article "Shared Acoustic Codes Underlie Emotional Communication in Music and Speech - Evidence from Deep Transfer Learning" (Coutinho & Schuller, 2017). In that article, four different data sets were used: SEMAINE, RECOLA, ME14 and MP (acronyms and datasets described below). The SEMAINE (speech) and ME14 (music) corpora were used for the unsupervised training of the Denoising Auto-encoders (domain adaptation stage) - only the audio features extracted from the audio files in these corpora were used, and they are provided in this repository. The RECOLA (speech) and MP (music) corpora were used for the supervised training phase - both the audio features extracted from the audio files and the Arousal and Valence annotations were used, and both are provided in this repository.
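As an illustration of the domain adaptation stage described above, the sketch below trains a minimal denoising auto-encoder on a matrix of pre-extracted acoustic features. This is a hedged sketch, not the code used in the article: the file name (semaine_features.csv), the feature dimensionality, the noise level and the layer sizes are placeholder assumptions, and a random stand-in matrix keeps the snippet self-contained.

```python
# Minimal denoising auto-encoder (DAE) sketch for the domain adaptation
# stage. File name, feature dimensionality, noise level and layer sizes
# are illustrative assumptions, not the article's actual configuration.
import numpy as np
from tensorflow import keras

# In practice, load the features provided in this repository, e.g.:
#   features = np.loadtxt("semaine_features.csv", delimiter=",")
features = np.random.randn(1000, 65).astype("float32")  # stand-in data

n_dims = features.shape[1]
inputs = keras.Input(shape=(n_dims,))
corrupted = keras.layers.GaussianNoise(0.1)(inputs)   # corrupt the input
code = keras.layers.Dense(32, activation="tanh")(corrupted)
reconstruction = keras.layers.Dense(n_dims)(code)

dae = keras.Model(inputs, reconstruction)
dae.compile(optimizer="adam", loss="mse")
# Train to reconstruct the clean features from their corrupted version.
dae.fit(features, features, epochs=10, batch_size=64, verbose=0)

# The encoder output is the domain-adapted representation.
encoder = keras.Model(inputs, code)
adapted_features = encoder.predict(features, verbose=0)
```

In the article, the Denoising Auto-encoders are trained on the SEMAINE (speech) and ME14 (music) features for domain adaptation; the single-layer model above only illustrates the mechanics of that stage.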
Emotions play a fundamental role in human communication. Music and films, in particular, are capable of...
The field of Music Emotion Recognition has become an established research sub-domain of Music Information Retrieval...
Accessing large, manually annotated audio databases in an effort to create robust models for emotion...
Music and speech exhibit striking similarities in the communication of emotions in the acoustic domain...
Without doubt, there is emotional information in almost any kind of sound received by humans every day...
Emotions are essential for human communication as they reflect our inner states and influence our ac...
This report contains the supplementary material for the paper titled ‘On Acoustic Emotion Recognition...
Obtaining large, human-labelled speech datasets to train models for emotion recognition is a notorio...
In this study, we address emotion recognition using unsupervised feature learning from speech data, ...
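The Arousal and Valence annotations mentioned at the top of this page enter at the supervised stage. The sketch below fits a small regressor from acoustic features to per-instance Arousal/Valence targets; as before, this is a hedged illustration, with file names (recola_features.csv, recola_arousal_valence.csv), layer sizes and training settings assumed for the example rather than taken from the article, and random stand-in arrays keeping it runnable.

```python
# Supervised stage sketch: map acoustic features to Arousal/Valence
# targets. All file names and sizes are illustrative assumptions; the
# real annotations ship with the RECOLA and MP parts of this repository.
import numpy as np
from tensorflow import keras

# In practice, load the provided features and annotations, e.g.:
#   features = np.loadtxt("recola_features.csv", delimiter=",")
#   targets = np.loadtxt("recola_arousal_valence.csv", delimiter=",")
features = np.random.randn(1000, 65).astype("float32")           # stand-in
targets = np.random.uniform(-1, 1, (1000, 2)).astype("float32")  # stand-in

inputs = keras.Input(shape=(features.shape[1],))
hidden = keras.layers.Dense(32, activation="tanh")(inputs)
outputs = keras.layers.Dense(2)(hidden)  # [arousal, valence] per instance

model = keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse")
model.fit(features, targets, epochs=10, batch_size=64, verbose=0)
```

In a transfer setup, the learned unsupervised representation (the DAE code from the earlier sketch) would replace the raw features here, so that a model trained on one domain can be applied to the other; the article itself details the actual transfer procedure.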