News and Events

Article in the Information Processing & Management journal

A group of researchers from the Sound and Music Description research area at the MTG (Dmitry Bogdanov, Martín Haro, Ferdinand Fuhrmann, Emilia Gómez and Perfecto Herrera), in collaboration with the Open University (Anna Xambó), is publishing a paper on music recommendation and music preference visualization in the Information Processing & Management journal, published by Elsevier.

This work is part of their "The Musical Avatar" project, a system that provides an iconic representation of one's musical preferences. The idea behind it is to use computational tools to automatically describe one's music (in audio format) in terms of melody, instrumentation, rhythm, etc., and to use this information both to build an iconic representation of one's musical preferences and to recommend new music. The whole system is based solely on content description, i.e. on the audio signal itself, and not on contextual information about the music found on web sites and elsewhere.
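As a rough, hypothetical sketch of what content-based description of a personal music collection can look like (this is not the authors' actual feature set or pipeline), the snippet below extracts a few rhythm, timbre and tonal descriptors per track with the librosa library and averages them into a simple preference vector; the descriptor choices and file names are illustrative assumptions.

```python
# Illustrative sketch only: a toy content-based "preference profile" built from
# a few audio descriptors, loosely in the spirit of the Musical Avatar idea.
# This is NOT the authors' actual feature set or pipeline.
import numpy as np
import librosa

def describe_track(path):
    """Return a small descriptor vector (tempo, brightness, chroma) for one track."""
    y, sr = librosa.load(path, mono=True)
    tempo, _ = librosa.beat.beat_track(y=y, sr=sr)                     # rhythm
    brightness = librosa.feature.spectral_centroid(y=y, sr=sr).mean()  # timbre
    chroma = librosa.feature.chroma_stft(y=y, sr=sr).mean(axis=1)      # tonal content
    return np.hstack([np.atleast_1d(tempo), brightness, chroma])

def preference_profile(paths):
    """Average per-track descriptors into a single 'musical preference' vector."""
    return np.mean([describe_track(p) for p in paths], axis=0)

# profile = preference_profile(["track1.mp3", "track2.mp3"])  # placeholder file names
```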

This is the reference:

The paper is also available on the MTG web page.

29 Nov 2012 - 11:45
Seminar by Dan Stowell on tracking sound sources in noise
22 Nov 2012

Dan Stowell, from Queen Mary, University of London, will give a seminar on "Tracking multiple intermittent sources in noise: inferring a mixture of Markov renewal processes" on Thursday November 22nd at 3:30pm in room 52.321.

Abstract: Consider the sound of birdsong, or footsteps. They are intermittent sounds, having as much structure in the gaps between events as in the events themselves. And often there's more than one bird, or more than one person - so the sound is a mixture of intermittent sources. Standard tracking techniques (e.g. Markov models, autoregressive models) are a poor fit to such situations. We describe a simple signal model (the Markov renewal process (MRP)) for these intermittent data, and introduce a novel inference technique that can infer the presence of multiple MRPs even in heavy noise. We illustrate the technique via a simulation of auditory streaming phenomena, and an experiment to track a mixture of singing birds.
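To make the signal model named in the abstract concrete, here is a toy simulation of a mixture of two Markov renewal processes plus noise events, the kind of data the talk's inference method is designed to untangle. All parameter values are arbitrary illustrative assumptions; this is not Stowell's inference method.

```python
# Toy simulation of the signal model named in the abstract: a Markov renewal
# process (MRP) emits discrete events whose types follow a Markov chain and
# whose gaps between events depend on the current state. Two MRPs ("birds")
# plus spurious noise events are superimposed into a single mixture.
# All parameter values are arbitrary illustrations, not Stowell's.
import numpy as np

rng = np.random.default_rng(0)

def simulate_mrp(n_events, trans, gap_means, t0=0.0):
    """Return (event_times, states) for one MRP with len(gap_means) states."""
    state = rng.integers(trans.shape[0])
    t, times, states = t0, [], []
    for _ in range(n_events):
        t += rng.exponential(gap_means[state])              # state-dependent waiting time
        times.append(t)
        states.append(state)
        state = rng.choice(trans.shape[0], p=trans[state])  # Markov transition
    return np.array(times), np.array(states)

trans = np.array([[0.9, 0.1],
                  [0.2, 0.8]])
bird1_times, _ = simulate_mrp(30, trans, gap_means=[0.2, 1.0])
bird2_times, _ = simulate_mrp(30, trans, gap_means=[0.5, 2.0], t0=0.3)
noise_times = rng.uniform(0, bird1_times[-1], size=10)      # false detections
mixture = np.sort(np.concatenate([bird1_times, bird2_times, noise_times]))
```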

19 Nov 2012 - 11:55
The MTG takes part in "Programa Professors i Ciència" (Fundació Catalunya Caixa)

The MTG collaborates in the "Programa Professors i Ciència" (Teachers & Science program), funded by Fundació Catalunya Caixa.

The program offers high-school teachers the opportunity to take part in scientific specialization courses at research centers in Catalonia, with the aim of bringing research closer to educational institutions at the secondary level. The MTG organizes a course on sound & nature that takes place at the Poblenou Campus on November 9th and 16th, 2012.

The workshop, "Sounds of Nature: The Nature of Sound", is devoted to studying natural sounds, their acoustic behavior, and how to describe and generate them by means of a computer. The course provides a specific set of educational resources based on free tools and sounds, so that the workshop content can be used directly in educational contexts. The MTG researchers involved in this initiative are Agustín Martorell, Sonia Espí, Jaume Ferrete and Emilia Gómez.

13 Nov 2012 - 15:13
MusicBrainz Summit at the UPF

The MTG-UPF will be hosting the 12th MusicBrainz Summit on November 9-11, 2012. MusicBrainz is an open music encyclopedia that collects music metadata; the CompMusic project uses it to gather the metadata of the music collections being studied.
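As a minimal illustration of what collecting metadata from MusicBrainz can look like in practice (this is not the CompMusic project's actual tooling), the sketch below queries the public MusicBrainz web service (WS/2) for recordings by an artist; the artist name and User-Agent string are placeholder assumptions.

```python
# Minimal sketch of pulling metadata from the public MusicBrainz web service
# (WS/2, JSON format). Not the CompMusic project's actual tooling; the artist
# name and User-Agent string below are placeholder examples.
import requests

def search_recordings(artist, limit=5):
    """Search MusicBrainz recordings by artist name and return (MBID, title) pairs."""
    resp = requests.get(
        "https://musicbrainz.org/ws/2/recording",
        params={"query": f'artist:"{artist}"', "fmt": "json", "limit": limit},
        headers={"User-Agent": "metadata-example/0.1 (you@example.org)"},
        timeout=10,
    )
    resp.raise_for_status()
    return [(rec["id"], rec.get("title", "")) for rec in resp.json()["recordings"]]

for mbid, title in search_recordings("Aruna Sairam"):
    print(mbid, title)
```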

The MusicBrainz Summit is a meeting of the editors and developers of MusicBrainz to discuss its future and to do some group hacking. To learn more about this MusicBrainz Summit, visit the official website.

7 Nov 2012 - 10:21
HPCP vamp plug-in available for download!

Following the great success of our MELODIA - Melody Extraction vamp plug-in, we are very pleased to announce the launch of the HPCP - Harmonic Pitch Class Profile vamp plug-in.

The plug-in provides a simple implementation of our chroma feature extraction algorithm, which has been used in different applications such as chord detection, key estimation, cover version identification and music classification. Full details of the algorithm can be found in the following papers:

NOTE: The main difference between this implementation and the original algorithm is that this implementation does not perform automatic tuning frequency estimation; instead, the reference tuning frequency is given as an input parameter.
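To convey the basic idea behind a pitch class profile, and to show where the reference tuning frequency enters, here is a deliberately simplified chroma sketch that folds an FFT magnitude spectrum onto 12 pitch classes. It omits the spectral peak selection, harmonic weighting and frame smoothing of the actual HPCP algorithm and is not the plug-in's implementation.

```python
# Deliberately simplified chroma sketch: fold an FFT magnitude spectrum onto
# 12 pitch classes relative to a user-supplied tuning frequency. The actual
# HPCP algorithm additionally uses spectral peak selection, harmonic weighting
# and windowed bin contributions; this only conveys the basic idea.
import numpy as np

def simple_chroma(frame, sr, tuning_hz=440.0, n_classes=12):
    """Crude pitch class profile of a single audio frame."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    valid = (freqs > 50) & (freqs < 5000)                   # ignore extreme bins
    # distance in semitones from the reference tuning, folded onto one octave
    semitones = 12 * np.log2(freqs[valid] / tuning_hz)
    classes = np.mod(np.round(semitones), n_classes).astype(int)
    chroma = np.zeros(n_classes)
    np.add.at(chroma, classes, spectrum[valid])             # accumulate magnitudes
    return chroma / (chroma.max() + 1e-12)                  # normalize to [0, 1]
```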

The plug-in is available online for free download (for non-commercial purposes). We hope it will serve the research community in evaluating different approaches to chroma feature extraction and in exploiting chroma features in higher-level music information retrieval tasks.

We are very interested in receiving feedback from the community, so please let us know what you think!

19 Oct 2012 - 13:07
Seminar by T. V. Sreenivas on Stochastic approaches to Music/Speech modeling

Title: "Stochastic approaches to Music/Speech modeling" by T.V. Sreenivas (Indian Institute of Science, Bangalore, India)

When and where? Tuesday October 23rd, 3:30pm in room 55.309

Abstract: Among the most prolific signals we deal with are speech and music, one being rich in information and the other rich in emotion, while sharing some characteristics with each other. Both types of signals are highly dynamic in nature, exhibiting a lot of variability due to individual characteristics of expression and style, in spite of underlying structural conventions. Stochastic models have been very successful in representing such variability in signal patterns, as well as structural variability (as seen in speech models). Indian art (classical) music is considered very structured and is practiced with great rigor, while allowing a certain freedom for individual artistic expression. We examine the stochastic approaches in the literature for analyzing Indian art music and present our approach to estimating the shadja, swara and rAga in an unsupervised manner. Through these models we draw parallels between the structure of speech and music signals and aim to explore the cognitive differences in the learning of speech and music.
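As a very rough illustration of the kind of unsupervised, stochastic estimation the abstract alludes to (and definitely not the speaker's method), the sketch below fits a Gaussian mixture to a pitch track folded onto one octave, so that the component means can be read as candidate swara positions relative to an assumed tonic; all names and values are illustrative.

```python
# Rough illustration of unsupervised swara estimation from a pitch track:
# fold F0 values (in cents relative to an assumed tonic) onto one octave and
# fit a Gaussian mixture so the component means suggest swara positions.
# This is NOT the speaker's method; it only makes the idea concrete.
import numpy as np
from sklearn.mixture import GaussianMixture

def candidate_swaras(f0_hz, tonic_hz, n_swaras=7):
    """Return candidate swara positions in cents above the tonic."""
    f0_hz = f0_hz[f0_hz > 0]                                # drop unvoiced frames
    cents = 1200 * np.log2(f0_hz / tonic_hz) % 1200         # fold onto one octave
    gmm = GaussianMixture(n_components=n_swaras, random_state=0)
    gmm.fit(cents.reshape(-1, 1))
    return np.sort(gmm.means_.ravel())

# Synthetic check: three "swaras" centered at 0, 200 and 500 cents above a D tonic.
rng = np.random.default_rng(0)
fake_cents = np.concatenate([rng.normal(m, 15, 300) for m in (0, 200, 500)])
fake_f0 = 146.8 * 2 ** (fake_cents / 1200)
print(candidate_swaras(fake_f0, tonic_hz=146.8, n_swaras=3))
```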

16 Oct 2012 - 08:52
Melody Extraction vamp plug-in available online!

We are very pleased to announce the launch of the MELODIA - Melody Extraction vamp plug-in during the ISMIR 2012 Conference, which will take place in Porto, Portugal, October 8th-12th, 2012.

This plug-in implements our melody extraction algorithm, which obtained good results in last year's MIREX Audio Melody Extraction campaign. Full details of the algorithm are available in:

J. Salamon and E. Gómez, "Melody Extraction from Polyphonic Music Signals using Pitch Contour Characteristics", IEEE Transactions on Audio, Speech and Language Processing, 20(6):1759-1770, Aug. 2012.

The plug-in is available online for free download (for non-commercial purposes). A slightly less formal description of the algorithm, including graphs and audio examples, is provided by the author, Justin Salamon, and can be found here.

In addition to benchmarking new algorithms against MELODIA, we hope it will serve the research community in research problems that could benefit from a predominant F0 estimator (e.g. query by humming, version identification, motif discovery and analysis, automatic transcription, source separation, etc.). We are very interested in receiving feedback from the research community, so please let us know what you think!
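For those who would like to try the plug-in from Python, here is a minimal sketch using the third-party vamp host module; it assumes MELODIA is installed on your system, that "mtg-melodia:melodia" is the plug-in key registered by the installer, and "song.wav" is a placeholder file name.

```python
# Minimal sketch of running MELODIA from Python through the "vamp" host module
# (pip install vamp), assuming the plug-in is installed and that
# "mtg-melodia:melodia" is its registered plug-in key; "song.wav" is a
# placeholder file name.
import numpy as np
import librosa
import vamp

audio, sr = librosa.load("song.wav", sr=44100, mono=True)
result = vamp.collect(audio, sr, "mtg-melodia:melodia")
hop, melody = result["vector"]          # time step and per-frame F0 values in Hz
voiced = melody[melody > 0]             # negative values mark unvoiced frames
print(f"{len(voiced)} voiced frames, median melody F0: {float(np.median(voiced)):.1f} Hz")
```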

Hope you enjoy the MELODIA experience!

5 Oct 2012 - 17:40
Voctro Labs collaborates in the new "El Plan B de Ballantine's"

Voctro Labs, a spin-off of the MTG, provides the singing voice synthesis technology for the campaign "El Plan B de Ballantine's". The popular band "La Oreja de Van Gogh" has composed a new song without lyrics (just music and melody) and is inviting all its fans to take part in the creative process of writing the lyrics for it. Fans can listen to the new song and write their lyrics on the campaign's web site. Voctro Labs created a new female Vocaloid voice for this project, which lets fans hear how their lyrics would sound if sung by a real singer.

The Plan B web site has been online since October 1st, drawing the attention of the media and of Vocaloid fans worldwide. At the end of the year, La Oreja de Van Gogh will choose their favorite lyrics among those sent in by fans and will use them for the final version of the song.

5 Oct 2012 - 11:49
Seminar by Gautham Mysore on Non-negative Hidden Markov Modeling of Audio

When and where? Thursday, Oct 4, 2012, 3:30pm, 52.321

Host: Xavier Serra (MTG)

Title: Non-negative Hidden Markov Modeling of Audio

Abstract:
Non-negative spectrogram factorization techniques have become quite popular in the last decade, as they are effective in modeling the spectral structure of audio. They have been used extensively for applications such as source separation and denoising. These techniques, however, fail to account for non-stationarity and temporal dynamics, two important properties of audio. In this talk, I will introduce the non-negative hidden Markov model (N-HMM) and the non-negative factorial hidden Markov model (N-FHMM) to model single sound sources and sound mixtures, respectively. They jointly model the spectral structure and temporal dynamics of sound sources while accounting for non-stationarity. I will also discuss the use of these models in applications such as source separation, denoising, and content-based audio processing, showing why they yield improved performance compared with non-negative spectrogram factorization techniques.
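For context, the baseline the talk builds on can be sketched in a few lines: plain non-negative matrix factorization of a magnitude spectrogram, which captures spectral structure but ignores temporal dynamics. The snippet below is only an illustrative sketch with a placeholder file name, not the N-HMM/N-FHMM models discussed in the seminar.

```python
# Sketch of the baseline the abstract contrasts against: plain non-negative
# matrix factorization of a magnitude spectrogram, which captures spectral
# structure but ignores temporal dynamics. Illustration only, with a
# placeholder file name; the N-HMM / N-FHMM add Markov structure on top.
import numpy as np
import librosa
from sklearn.decomposition import NMF

y, sr = librosa.load("mixture.wav", mono=True)
S = np.abs(librosa.stft(y, n_fft=1024, hop_length=256))   # magnitude spectrogram

model = NMF(n_components=8, init="random", max_iter=300, random_state=0)
W = model.fit_transform(S)     # spectral templates: (freq_bins, components)
H = model.components_          # per-frame activations: (components, frames)
approx = W @ H                 # low-rank approximation of the spectrogram
print("relative reconstruction error:", np.linalg.norm(S - approx) / np.linalg.norm(S))
```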

2 Oct 2012 - 16:49
Strong participation of MTG researchers at ISMIR 2012

Twelve papers discussing research done at the MTG are being presented at the 13th International Society for Music Information Retrieval Conference (ISMIR 2012), which takes place in Porto from October 8th to 12th, 2012. These are:

 

2 Oct 2012 - 09:07