Mood Cloud 2.0: Music Mood Browsing based on Social Networks

Title: Mood Cloud 2.0: Music Mood Browsing based on Social Networks
Publication Type: Conference Paper
Year of Publication: 2009
Conference Name: International Society for Music Information Retrieval Conference (ISMIR)
Authors: Laurier, C., Sordo, M., & Herrera, P.
Conference Start Date: 26/10/2009
Conference Location: Kobe, Japan
Abstract
This paper presents Mood Cloud 2.0, an application for visualizing and browsing music by mood. With the first version of Mood Cloud, we could visualize in real time the mood predictions of several Support Vector Machine models (one for each 'basic' mood). This helped us understand how well we can predict the evolution of mood over time. Version 2.0 adds a new 2D visualization based on social network data, together with retrieval features. In this representation, one can visualize one's collection, observe the mood evolution of a song over time, and draw a path to build a playlist or to retrieve a song based on its evolution over time. The 2D space is flexible: the user can choose between different templates, the most innovative being a representation extracted from social networks called the semantic mood space. The 2D semantic mood space was obtained by applying Self-Organizing Maps to tag data from Last.fm. Each song in one's collection is mapped into the semantic mood space using its tags. Other modes and representations are also proposed. If tags are not available, the autotagger function can be used: it automatically adds tags to a piece and thus places it in the semantic space. The same technique is used to estimate the mood evolution of a song by dividing it into segments of a few seconds. Additionally, pre-computed audio mood models (the updated models from Mood Cloud 1.0) are available; these are state-of-the-art mood classification algorithms. For these models, the 2D representation can be changed using different axes: the user selects the two dimensions from the existing audio models in Mood Cloud 1.0 (happy, sad, aggressive, relaxed and party). One can, for instance, visualize a collection in the aggressive/sad or relaxed/happy space. With both the autotagger and the mood models, any collection can be mapped into a 2D space and browsed there. By analyzing songs in windows of a few seconds, we can visualize, in the same space, the instantaneous mood and its evolution during the song. Finally, drawing a path in that space can be used to build a playlist or to search for a song with that particular mood evolution over time.
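To make the semantic mood space concrete, the following is a minimal sketch of training a Self-Organizing Map on song tag vectors and projecting songs onto the resulting 2D grid, in the spirit of the Last.fm-based space described in the abstract. The tag vocabulary, the toy tag counts, and all function names are illustrative assumptions, not the authors' actual data or code.

```python
# Sketch: build a 2D "semantic mood space" with a Self-Organizing Map and
# map songs into it via their tag vectors. Everything here is illustrative.
import numpy as np

MOOD_TAGS = ["happy", "sad", "aggressive", "relaxed", "party"]  # assumed vocabulary

def train_som(data, grid=(10, 10), epochs=200, lr0=0.5, sigma0=3.0, seed=0):
    """Train a small SOM; each unit becomes a point in the 2D mood space."""
    rng = np.random.default_rng(seed)
    weights = rng.random((grid[0], grid[1], data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(grid[0]), np.arange(grid[1]),
                                  indexing="ij"), axis=-1)
    for t in range(epochs):
        lr = lr0 * np.exp(-t / epochs)          # decaying learning rate
        sigma = sigma0 * np.exp(-t / epochs)    # shrinking neighbourhood
        for x in rng.permutation(data):
            # best-matching unit for this tag vector
            bmu = np.unravel_index(
                np.argmin(((weights - x) ** 2).sum(axis=2)), grid)
            dist2 = ((coords - np.array(bmu)) ** 2).sum(axis=-1)
            h = np.exp(-dist2 / (2 * sigma ** 2))[..., None]
            weights += lr * h * (x - weights)
    return weights

def project(weights, x):
    """Map one song's normalised tag vector to its 2D SOM coordinates."""
    return np.unravel_index(
        np.argmin(((weights - x) ** 2).sum(axis=2)), weights.shape[:2])

# Toy tag-count vectors for a few songs (columns align with MOOD_TAGS).
songs = np.array([[30, 2, 0, 5, 20],    # mostly happy/party
                  [1, 40, 0, 10, 0],    # mostly sad
                  [0, 5, 35, 0, 8]],    # mostly aggressive
                 dtype=float)
songs /= songs.sum(axis=1, keepdims=True)  # normalise per song

som = train_som(songs)
for i, s in enumerate(songs):
    print(f"song {i} -> grid cell {project(som, s)}")
```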
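The per-window mood prediction, with one SVM per basic mood as in the Mood Cloud audio models, could be sketched as below. The features and labels are random placeholders standing in for real audio descriptors and ground truth, so this only illustrates the shape of the pipeline, not the paper's trained models.

```python
# Sketch: one probabilistic binary SVM per mood, applied to a song analysed
# in windows of a few seconds. Features and labels are synthetic stand-ins.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
MOODS = ["happy", "sad", "aggressive", "relaxed", "party"]

# Train one classifier per mood on fake feature data.
models = {}
for mood in MOODS:
    X = rng.normal(size=(100, 20))          # stand-in for audio descriptors
    y = rng.integers(0, 2, size=100)        # stand-in for mood labels
    models[mood] = SVC(probability=True).fit(X, y)

# Analyse a "song" window by window and track its mood over time.
song_windows = rng.normal(size=(12, 20))    # 12 windows of fake features
trajectory = np.array([[models[m].predict_proba(w[None])[0, 1] for m in MOODS]
                       for w in song_windows])
print(trajectory.round(2))  # rows = windows, columns = mood probabilities
```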
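One plausible reading of the path-based retrieval is to resample both the user-drawn path and each song's 2D mood trajectory to a common length and rank songs by point-wise distance. This matching metric is an assumption made for illustration, not the paper's actual method, and the trajectories below are random.

```python
# Sketch: retrieve songs whose 2D mood trajectory best matches a drawn path.
# The resample-and-compare metric and all data here are assumptions.
import numpy as np

def resample(path, n=50):
    """Linearly resample a 2D polyline to n evenly spaced points."""
    path = np.asarray(path, dtype=float)
    t = np.linspace(0, 1, len(path))
    ti = np.linspace(0, 1, n)
    return np.column_stack([np.interp(ti, t, path[:, d]) for d in range(2)])

def path_distance(a, b, n=50):
    """Mean point-wise distance between two resampled paths."""
    ra, rb = resample(a, n), resample(b, n)
    return float(np.linalg.norm(ra - rb, axis=1).mean())

rng = np.random.default_rng(2)
# Fake per-song trajectories in the 2D space (e.g. happy vs. aggressive).
library = {f"song_{i}": np.cumsum(rng.normal(scale=0.05, size=(30, 2)), axis=0)
           for i in range(20)}

drawn = [(0.0, 0.0), (0.2, 0.3), (0.5, 0.4), (0.8, 0.1)]  # user-drawn path
ranked = sorted(library, key=lambda s: path_distance(drawn, library[s]))
print("best matches:", ranked[:3])
```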
Preprint/postprint document: http://mtg.upf.edu/system/files/publications/Laurier-ISMIR-LBD-2009.pdf