News and Events

Seminar by Bill Verplank on sketching metaphors

Bill Verplank, from CCRMA, will give a seminar on "Sketching Metaphors" on Thursday, February 7th, at 3:30pm in room 52.321.

In this seminar I will describe (sketch) some metaphors I have used to provide a framework for Interaction Design - examples will be drawn from the course that Max Mathews and I developed at CCRMA on designing music controllers (NIME).

About Bill Verplank:
Bill Verplank is a human factors engineer and designer educated in ME at Stanford and MIT. After four years teaching design at Stanford he spent 22 years in industry: at Xerox (user interfaces), IDEO (product design) and Interval Research (haptics). He has been active as a visiting lecturer at Stanford (ME, CS, CCRMA), ID/IIT, TU/e, IDII, CIID and professionally in ACM: SIGCHI, DIS, TEI, NIME.

4 Feb 2013 - 18:15 | view
Web Interface Designer job at the MTG-UPF

At the MTG-UPF and in the context of the CompMusic project we are looking for a Web Interface Designer to be involved in the development of a system to browse and interact with audio collections. The system is an online web application that interfaces with musical data (audio, scores, editorial information) plus musical descriptions that are automatically obtained from the data.

The Web Designer will be responsible for the graphical and functional design elements of the system, creating and implementing attractive and effective website designs that provide the end user with an engaging experience.

Given that the work will involve many meetings and discussions with the researchers at the UPF, the candidate should live in the Barcelona area.

Required skills:

  • Experience in web and interface design, graphic design, web development, user interface design and user experience.
  • Have an innovative design approach to navigation and search of audiovisual media.
  • Experience in graphic design tools such as Photoshop, Illustrator or similar.
  • Software development skills using HTML/CSS/JS (recommended HTML5 and CSS3).
  • Proficiency in English.

Interested candidates should send a CV and examples of relevant work to Xavier Serra (xavier [dot] serra [at] upf [dot] edu), with the subject "Web Designer job".

25 Jan 2013 - 19:28 | view
Seminar by Geoffroy Peeters on annotating MIR corpora

Geoffroy Peeters, from IRCAM, will give a seminar on "Annotated MIR Corpora, MSSE search engine for music, Perceptual Tempo" on Thursday, January 24th, at 3:30pm in room 52.321.

In this talk I will focus on three recent topics studied at IRCAM.
The first concerns a proposal for the description of annotated MIR corpora. Considering that annotated MIR corpora are today provided by various research labs and companies, each using its own annotation methodology, concept definitions and formats, it is essential to define precisely how annotations are supplied and described. We propose a set of axes against which corpora can be described.
The second concerns our experience in integrating music indexing technologies into a third-party search and navigation engine (the Orange MSSE search engine). We explain the work performed in terms of the choice of technology, the development of annotated corpora for training the systems, HMI development, and user tests.
The third concerns the estimation of perceptual tempo and the reduction of the so-called octave errors of tempo estimation algorithms. Using data from a Last.FM perceptual experiment, we model the relationship between a set of four audio features and the perceptual tempo using a GMM regression technique. We show that this technique outperforms current tempo estimation algorithms.
  • G. Peeters and K. Fort. "Towards a (better) definition of the description of annotated MIR corpora," In Proc. of ISMIR, Porto, Portugal, October 2012.
  • G. Peeters, F. Cornu, D. Tardieu, C. Charbuillet, J. J. Burred, M. Ramona, M. Vian, V. Botherel, J.-B. Rault, and J.-P. Cabanal. "A multimedia search and navigation prototype, including music and video-clips," In Proc. of ISMIR, Porto, Portugal, October 2012.
  • G. Peeters and J. Flocon-Cholet. "Perceptual tempo estimation using GMM regression," In Proc. of ACM Multimedia / MIRUM (Workshop on Music Information Retrieval with User-Centered and Multimodal Strategies), Nara, Japan, October 2012.
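As a rough illustration of the GMM-regression idea mentioned in the abstract (this is a generic sketch, not IRCAM's implementation; the toy features, tempo relation and number of components are invented for the example): a Gaussian mixture is fit on the joint distribution of feature vectors and tempo, and prediction takes the conditional expectation E[tempo | features] under that mixture.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def gmr_predict(gmm, X, dx):
    """Conditional expectation E[y | x] from a GMM fitted on joint [x, y] vectors."""
    preds = []
    for x in X:
        cond_means, logw = [], []
        for k in range(gmm.n_components):
            mu, S = gmm.means_[k], gmm.covariances_[k]
            mux, muy = mu[:dx], mu[dx:]
            Sxx, Sxy = S[:dx, :dx], S[:dx, dx:]
            Sxx_inv = np.linalg.inv(Sxx)
            diff = x - mux
            # responsibility of component k from the marginal p(x | k)
            logp = -0.5 * (diff @ Sxx_inv @ diff
                           + np.linalg.slogdet(2 * np.pi * Sxx)[1])
            logw.append(np.log(gmm.weights_[k]) + logp)
            # conditional mean of y given x under component k
            cond_means.append(muy + Sxy.T @ Sxx_inv @ diff)
        logw = np.array(logw)
        w = np.exp(logw - logw.max())
        w /= w.sum()
        preds.append(w @ np.array(cond_means))
    return np.array(preds).ravel()

# Toy stand-in data: 4 "audio features" with a noisy relation to tempo (BPM).
rng = np.random.default_rng(0)
feats = rng.random((500, 4))
tempo = 60 + 120 * feats[:, 0] + 10 * np.sin(6 * feats[:, 1]) \
        + rng.normal(0, 2, 500)
gmm = GaussianMixture(n_components=4, covariance_type="full",
                      random_state=0).fit(np.column_stack([feats, tempo]))
est = gmr_predict(gmm, feats[:10], dx=4)  # estimated tempi for 10 tracks
```

Mixing the per-component conditional means by the responsibilities is what lets the regression switch between tempo regimes, which is also the intuition behind correcting octave errors.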
18 Jan 2013 - 14:21 | view
Music for Cochlear Implants concert
9 Feb 2013

Saturday February 9th, 2013 at 12PM

Auditori CAIXA FORUM (Barcelona)

Av. Francesc Guardia 6-8, Barcelona

Free admission


We invite you to participate in a unique experience in which researchers and musicians come together for the hearing impaired.

musIC is a project researching music perception with cochlear implant devices, aiming to understand how these devices can be further developed. The cochlear implant is a surgically implanted medical device designed mainly to restore the perception of speech sounds, but it still has many limitations for music listening. With this objective in mind, we are organizing a concert specially designed around the limitations of listening to music with cochlear implants. The concert is also intended for the general public.

We will hear pieces played with different instruments and formations: a string quartet and flute, soprano, piano, guitar and ReacTable, an interactive electronic instrument developed by the Music Technology Group.

With this concert we will try to better understand how music is perceived. Attendees can contribute to the research by participating in a survey about the musical experience.

Compositions: Alejandro Civilotti, Alejandro Fränkel, Sergio Naddei, Luis Nogueira.

Organizers: Music Technology Group (Universitat Pompeu Fabra) and Phonos Foundation. With the support of: Advanced Bionics.


9 Jan 2013 - 18:56 | view
Seminar by Uri Nieto on Music structure analysis

Uri Nieto, from the Music and Audio Research Lab of NYU, will give a talk on "Music Structure Analysis using Matrix Factorization" on Thursday, January 10th, at 3:30pm in room 52.321.

Abstract: We propose a novel and fast approach to discover structure in western popular music by using a specific type of matrix factorization that adds a convex constraint to obtain a decomposition that can be interpreted as a set of weighted cluster centroids. We show that these centroids capture the different sections of a musical piece (e.g. verse, chorus) in a more consistent and efficient way than classic non-negative matrix factorization. This technique is capable of identifying the boundaries of the sections and then grouping them into different clusters.
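The factorization idea can be sketched generically (this is not the speaker's code; the update rules below are the standard convex-NMF multiplicative updates for non-negative data, and the toy "chromagram" is invented for the example). Factorizing X ≈ X W G^T forces the centroids F = X W to be combinations of actual frames, so each column of F can be read as a section prototype and G as a frame-to-section assignment.

```python
import numpy as np

def convex_nmf(X, k, n_iter=300, seed=0):
    """Toy convex-NMF sketch: X (features x frames) ~ X @ W @ G.T,
    so the centroids F = X @ W are built from actual data frames."""
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    W = rng.random((n, k)) + 1e-3
    G = rng.random((n, k)) + 1e-3
    A = X.T @ X  # non-negative because X is non-negative
    eps = 1e-9
    for _ in range(n_iter):
        # multiplicative updates for the non-negative-input case
        G *= np.sqrt((A @ W) / (G @ (W.T @ A @ W) + eps))
        W *= np.sqrt((A @ G) / (A @ W @ (G.T @ G) + eps))
    return W, G

# Toy "chromagram": two alternating sections with distinct feature profiles.
rng = np.random.default_rng(1)
a = np.abs(rng.normal(1.0, 0.1, (12, 20)))   # section A frames
b = np.abs(rng.normal(0.2, 0.05, (12, 20)))  # section B frames
X = np.hstack([a, b, a, b])                  # A B A B form, 80 frames
W, G = convex_nmf(X, k=2)
centroids = X @ W           # one column per discovered section prototype
labels = G.argmax(axis=1)   # frame-to-section assignment
```

Reading off section boundaries then amounts to finding the frames where `labels` changes value, and the weighted-centroid interpretation is what distinguishes this from classic NMF, where the basis vectors need not resemble any actual frame.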

Biography: Oriol Nieto is currently pursuing a Ph.D. degree in Music Technology at New York University, supervised by Morwaread Farbood and Juan Bello. He is a guitarist, violinist, composer, and music technologist. He received a B.S. in computer science from the Polytechnic University of Catalonia (UPC), an M.S. in Information Technologies, Communication, and Audiovisual Media from Pompeu Fabra University (UPF), and an M.A. in Music, Science and Technology from Stanford University. His main interests are Music Information Retrieval, Music Cognition, Machine Learning, and Mobile Music.

7 Jan 2013 - 10:49 | view
PhD positions at the MTG

The MTG is opening 5 funded PhD positions to work within some of its research areas, with a starting date of September 2013. The candidates have to apply to the PhD program of the Department of Information and Communication Technologies. They have to demonstrate an academic and research background in the area they are applying for and have to submit a research proposal on a specific topic. The areas and topics for which we offer the funded positions are:

Sound and Music Communities (responsible faculty: Xavier Serra): Within the CompMusic project we have two open positions to work on computational approaches for the understanding of the relationship between lyrics and music in one or several of the following music traditions: Hindustani (North India), Carnatic (South India), Turkish-makam (Turkey), Andalusian (Maghreb) and Beijing Opera (China). In the context of Freesound we also have one open position to work on issues related to community profiling, linked data, automatic data structuring and sound ontologies.

Sound and Music Description (responsible faculty: Emilia Gomez): We have one open position to work on one of these two topics: (1) Computer-assisted transcription, similarity and classification of flamenco singing, using signal processing, machine learning and user modeling methodologies. Spanish funded project, SIGMUS. (2) Music & Autobiographical Memory, dealing with audio feature extraction, music recommendation and user preferences' modeling.

Musical and Advanced Interaction (responsible faculty: Sergi Jordà): We have one open position to work on tangible and tabletop interaction.

Before making the application the candidate needs the support of the faculty member responsible for the research area chosen. Interested people should first send a CV and a motivation letter to the faculty member identified.

3 Jan 2013 - 14:05 | view
Application open for the Master in Sound and Music Computing

The application for the Master in Sound and Music Computing, program 2012-2013, is open on-line. There are 4 application periods (deadlines: January 16th, March 14th, May 16th, June 28th). For more information on the UPF master programs and specifically on the SMC Master check here.

2 Jan 2013 - 19:29 | view
MTG-QBH: new dataset of sung melodies

As a little gift for the holiday season (be it Christmas, Hanukkah, Tenno no Tanjobi or any other festivity you celebrate!), we're glad to announce the release of a new dataset: MTG-QBH.

The dataset includes 118 recordings of a cappella sung melody excerpts. The recordings were made as part of the experiments on Query-by-Humming (QBH) reported in:

J. Salamon, J. Serrà and E. Gómez, "Tonal Representations for Music Retrieval: From Version Identification to Query-by-Humming", International Journal of Multimedia Information Retrieval, special issue on Hybrid Music Information Retrieval, In Press (Nov. 2012).

In addition to the query recordings, three meta-data files are included, one describing the queries and two describing the music collections against which the queries were tested in the experiments described in the aforementioned article.

Whilst the query recordings are included in this dataset, audio files for the music collections listed in the meta-data files are not, as they are protected by copyright law (sorry!). Nonetheless, all tracks are commercially available, and we hope that those interested in using this dataset for QBH will be able to acquire them easily.

Further information about the queries, how they were recorded and by whom is available on the dataset website, where you can of course download the audio and metadata files.

We hope that you find this dataset useful, whether for QBH or any other research topic (e.g. monophonic transcription), and would be very interested to receive your feedback.

13 Dec 2012 - 16:36 | view
New VST plug-in by Yamaha based on a previous joint research with the MTG

Yamaha Corporation has released through Steinberg Media Technologies a new VST plug-in known as 'sonote beat re:edit' based on the achievements of a research project in collaboration with the Music Technology Group.

On top of the concept and know-how gained during the previous joint research with the MTG, Yamaha has fully conceptualised the product idea behind this novel application, which is powered by Yamaha's proprietary technologies.

13 Dec 2012 - 15:03 | view
MELODIA downloaded over 250 times and HPCP reaches 100!

MELODIA, our melody extraction vamp plug-in by Justin Salamon, reached its 250th download yesterday! Also, our recently released HPCP vamp plug-in by Emilia Gómez and Jordi Bonada has just reached 100 downloads!

Apart from obviously being excited about the interest in both plug-ins, we were also really surprised by the wide range of uses people have found for them. For MELODIA, in addition to the perhaps more expected research purposes (transcription, query-by-humming, computational musicology and ethnomusicology, music similarity, structure analysis, etc.), people have downloaded it for educational use in schools and universities, for music composition (for example for synthesizing natural sounding vibrato by using the pitch curve generated by a real singer, or for vocaloid compositions), for checking out the current state-of-the-art (including some commercial companies), and even just to "view music in a different way" and "for fun".

HPCP has also been downloaded for a variety of purposes including composition, analysis, education, alignment of different audio recordings, comparison of chroma-related features for retrieval in musical heritage collections, to analyze recordings of electronic music, to study song structure and even just to "Have fun with HPCP".

So... what next?

If you haven't already, you can try out MELODIA and HPCP for yourself:

5 Dec 2012 - 19:20 | view