Abstract

Digital music is becoming a major part of the user experience with computers and mobile devices, and automatically organizing this content is a significant challenge. In this work, we focus on automatically classifying music by mood. For this purpose, we propose computational models using information extracted from the audio signal. These models build on techniques from the fields of signal processing, machine learning, and information retrieval. First, by studying the tagging behavior of a music social network with dimensionality reduction techniques, we derive a representation model for mood in music; we believe this methodology can be applied to other domains as well. Then, we propose a method for automatic music mood classification and detail the results for different types of classifiers. We analyze the contributions of the audio descriptors and how their values relate to the observed mood, seeking explanations from psychology and musicology.
We also propose a multimodal version of our algorithm that incorporates lyrics, contributing to the field of text retrieval with a new model based on keywords that differentiate categories.
Moreover, after showing the relation between mood and genre, we present a new approach using automatic music genre classification. We demonstrate that genre-based mood classifiers achieve higher accuracies than standard audio models. Finally, we propose a rule extraction technique to make explicit the strategy behind our models. This method allows us to make sense of the classifiers and to understand how they predict musical mood. All the proposed algorithms are evaluated with user data, and our audio-based approaches, adapted to the context, have been evaluated in international evaluation campaigns.