Music tags are commonly used to describe and categorize music. Various auto-tagging models and datasets have been proposed for automatic music annotation with tags. However, past approaches often neglect the fact that many of these tags largely depend on the user, especially tags related to the context of music listening. In this paper, we address this problem by proposing a user-aware music auto-tagging system and evaluation protocol. Specifically, we use both the audio content and user information extracted from the user's listening history to predict contextual tags for a given user/track pair. We propose a new dataset of music tracks annotated with contextual tags per user. We compare our model to the traditional audio-based m...
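As a rough illustration of the kind of model such a system implies (not the authors' exact architecture), the sketch below fuses a precomputed audio embedding with a learned user embedding and predicts contextual tags as a multi-label output. The dimensions, tag count, and PyTorch layers are assumptions made for the example.

```python
import torch
import torch.nn as nn

NUM_CONTEXT_TAGS = 15  # number of contextual tags, as reported for the dataset

class UserAwareTagger(nn.Module):
    """Minimal user-aware auto-tagger sketch: combines a track's audio
    embedding with a user embedding and outputs one logit per contextual tag."""

    def __init__(self, audio_dim=128, num_users=10_000, user_dim=64, hidden=256):
        super().__init__()
        self.user_emb = nn.Embedding(num_users, user_dim)  # stands in for listening-history information
        self.head = nn.Sequential(
            nn.Linear(audio_dim + user_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, NUM_CONTEXT_TAGS),
        )

    def forward(self, audio_feat, user_id):
        z = torch.cat([audio_feat, self.user_emb(user_id)], dim=-1)
        return self.head(z)  # train with BCEWithLogitsLoss for multi-label tags

# Usage: predict contextual tags for one (user, track) pair
model = UserAwareTagger()
audio_feat = torch.randn(1, 128)                 # stand-in for a pretrained audio embedding
logits = model(audio_feat, torch.tensor([42]))   # hypothetical user id
probs = torch.sigmoid(logits)                    # per-tag probabilities for this user/track pair
```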
Algorithms for automatic playlist generation solve the problem of tedious and time-consuming manual ...
Music recommender systems can offer users personalized and contextualized recommendations and are the...
Contextual information of the listener is only slowly being integrated into music retrieva...
This is a user-aware music dataset labeled with the contextual use of each track according to each u...
The dataset is composed of 15 contextual tags extracted based on users' usage through created playli...
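To make the playlist-based tag extraction concrete, here is a minimal, hypothetical sketch that maps playlist titles to contextual tags by keyword matching; the keyword vocabulary and helper names are illustrative assumptions, not the dataset's actual extraction pipeline.

```python
# Hypothetical keyword map from playlist-title terms to contextual tags (illustrative only)
CONTEXT_KEYWORDS = {
    "workout": ["workout", "gym", "running"],
    "party":   ["party", "dance"],
    "sleep":   ["sleep", "bedtime", "calm"],
    "work":    ["work", "focus", "study"],
}

def tags_from_playlist_title(title: str) -> set[str]:
    """Assign contextual tags to the tracks of a playlist based on its title."""
    lower = title.lower()
    return {tag for tag, kws in CONTEXT_KEYWORDS.items() if any(k in lower for k in kws)}

print(tags_from_playlist_title("Morning run & gym mix"))  # {'workout'}
```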
As music has become more available especially on music streaming platforms, people have started to h...
Music auto-tagging refers to automatically assigning semantic labels (tags) such as genre, mood and...
The rise of digital music has led to a parallel rise in the need to manage music collections of seve...
Music libraries are constantly growing, often tagged in relation to their instrumentation or artist. A...
As music distribution has evolved from physical media to digital content, tens of millions of songs ...
Visualizing audio signals during playback has long been a fundamental function of music players. How...
This paper examines the use of two kinds of context to improve the results of content-based music ta...
Modern society has drastically changed the way it consumes music. In recent years, li...
Automatic music classification aims at grouping unknown songs in predefined ca...
This paper presents the MusiClef data set, a multimodal data set of professionally annotated music. ...