News and Events

Software developer position at the MTG-UPF
This position involves working together with researchers at the MTG-UPF in Barcelona to develop and maintain web-based applications related to sound and music. It relates to a number of MTG projects that include large repositories of sounds with user communities around them.
Starting date: immediate
Duration: 12 months with option to renew
Required skills/qualifications:

Bachelor degree in Computer Science or a similar educational qualification
Proficiency in both written and spoken English
Proficiency in Python and C/C++
Experience with at least one Python-based web framework (such as Django or Flask)
Experience in developing APIs
Familiarity with concepts of audio signal processing and machine learning
Experience in working with databases and large datasets
Demonstrated ability to write maintainable, well-documented software

Preferred skills/experience:

Working experience with source control systems and unit testing
Experience in system administration tasks
Passion for music and audio
Participation in open source software projects


The Music Technology Group of the Universitat Pompeu Fabra is a leading research group with more than 40 researchers, carrying out research on topics such as audio signal processing, sound and music description, musical interfaces, sound and music communities, and performance modeling. The MTG wants to contribute to the improvement of the information and communication technologies related to sound and music, carrying out competitive research at the international level while at the same time transferring its results to society. To that end, the MTG aims at finding a balance between basic and applied research while promoting interdisciplinary approaches that incorporate knowledge from both scientific/technological and humanistic/artistic disciplines.


Interested people should send a resume as well as a motivation letter to mtg [at] upf [dot] edu with the subject "Junior software developer".
19 Dec 2014 - 17:34
Phonos/MTG Exhibit at the Museu de la Música in Barcelona

To celebrate 40 years of Phonos (20 of which are also part of the history of the MTG) we have organized an exhibit at the Museu de la Música in Barcelona. Through the presentation of 10 electronic instruments that were either developed or used at Phonos/MTG, we review our history in the field of electronic music, going over some of the people who made this history and presenting some of the music that resulted from it.

Phonos was created in 1974 by Andrés Lewin-Richter, Josep Maria Mestres Quadreny and Lluís Callejo as an electronic music laboratory, and in 1994 Phonos became part of the Universitat Pompeu Fabra, resulting in the creation of the Music Technology Group (MTG). Very interesting things have happened in these 40 years and relevant contributions have been made to the world of Electronic Music. To learn a bit about this you will have to go to the Museu de la Música in Barcelona.

The Exhibition opens on December 18th 2014 and will remain open until September 27th 2015.

16 Dec 2014 - 15:21
Open PhD positions at MTG-UPF
4 Dec 2014 - 15 Jan 2015

The MTG of the Universitat Pompeu Fabra in Barcelona is opening 3 funded PhD positions to work within some of its research projects, with a starting date of September 2015. Candidates have to apply to the PhD program of the Department of Information and Communication Technologies. They have to demonstrate an academic and research background of relevance to the proposed project context, and they have to submit a research proposal on a specific topic within that context, to be approved by the MTG before making the formal application.

The projects for which we offer the funded positions are related to the automatic description of large sound and music collections. The research to be carried out should combine audio signal processing techniques for content analysis of the audio recordings with semantic web technologies for analyzing the contextual information related to the recordings. Candidates should be competent in fields such as audio signal processing, machine learning, statistical analysis, and/or natural language processing. They should be able to work with collections including several million audio recordings plus their contextual information, thus developing algorithms that can scale to these sizes.

Several projects being carried out at the MTG relate to these positions.

More practical and administrative information about the PhD program is available on its website.

Interested people should send a CV, a motivation letter, and a first draft of a thesis proposal to mtg [at] upf [dot] edu before January 15th 2015.



4 Dec 2014 - 17:51
Application open for the Master in Sound and Music Computing
3 Dec 2014 - 26 Jun 2015

The application for the Master in Sound and Music Computing, program 2015-2016, is open on-line. There are 4 application periods (deadlines: January 23rd, March 13th, May 15th, June 26th). For more information on the UPF master programs, and specifically on the SMC Master, see the UPF website.

3 Dec 2014 - 16:02
Starting new projects in 2015

We are happy to share with you that we are about to start some amazing projects in 2015. The list of projects is as follows:

RAPID-MIX (PI Sergi Jordà) is an Innovation Action funded by the European Commission that brings together 3 leading research institutions, 4 dynamic creative-industry SMEs, and 1 leading wearable technology SME in a consortium to deliver to market innovative multimodal interfaces for music, gaming, and e-Health applications.

The MusicBricks project (PI Xavier Serra), funded by the European Commission, aims to exploit the creative and commercial possibilities of music technologies by piloting innovative musical tools with the new generation of SME digital makers and content creators. It leverages state-of-the-art European research by providing a compendium of physical, virtual and programming interfaces, giving creative developers easy access to the core building blocks of music.

TIMUL is a music eLearning project funded by the Spanish Ministry of Economy and Competitiveness, led by the Universitat d'Alacant (PI Jose Manuel Iñesta Quereda), with Rafael Ramírez leading our side. It aims to investigate and explore the relevant aspects needed to produce methods and tools for music education with innovative pedagogical paradigms, taking into account key factors such as expressivity, interactivity, gesture control, and cooperative work among participants.

1 Dec 2014 - 18:42
Marco Marchini defends his PhD thesis on November 27th
27 Nov 2014

Marco Marchini defends his PhD thesis entitled "Analysis of Ensemble Expressive Performance in String Quartets: a Statistical and Machine Learning Approach" on Thursday November 27th 2014 at 12:00h in room 55.309 of the Communication Campus of the UPF.

The jury of the defense is: Xavier Serra (UPF), Josep Lluís Arcos (IIIA-CSIC), Roberto Bresin (KTH).

Abstract: Computational approaches for modeling expressive music performance have produced systems that emulate human expression, but few steps have been taken in the domain of ensemble performance. Polyphonic expression and inter-dependence among voices are intrinsic features of ensemble performance and need to be incorporated at the very core of the models. For this reason, we proposed a novel methodology for building computational models of ensemble expressive performance by introducing inter-voice contextual attributes (extracted from ensemble scores) and building separate models of each individual performer in the ensemble. We focused our study on string quartets and recorded a corpus of performances both in ensemble and solo conditions, employing multi-track recording and bowing motion acquisition techniques. From the acquired data we extracted bowed-instrument-specific expression parameters performed by each musician. As a preliminary step, we investigated the difference between solo and ensemble performance from a statistical point of view and showed that the introduced inter-voice contextual attributes and the extracted expression parameters are statistically sound. In a further step, we built models of expression by training machine-learning algorithms on the collected data. As a result, the introduced inter-voice contextual attributes improved the prediction of the expression parameters.
Furthermore, results on attribute selection show that the models trained on ensemble recordings took more advantage of inter-voice contextual attributes than those trained on solo recordings. The obtained results show that the introduced methodology can have applications in the analysis of collaboration among musicians.

21 Nov 2014 - 09:50
We are launching the AcousticBrainz project today!

The AcousticBrainz project aims to crowdsource acoustic information for all music in the world and to make it available to the public. This acoustic information describes the acoustic characteristics of music and includes low-level spectral information and additional high-level descriptors for genres, moods, keys, scales and much more. The goal of AcousticBrainz is to provide music technology researchers and open source hackers with a massive database of information about music. We hope that this database will spur the development of new music technology research and allow music hackers to create new and interesting recommendation and music discovery engines.

AcousticBrainz is a joint effort between the MusicBrainz project and our research group, originally envisioned by Xavier Serra. At the heart of this project lies the Essentia software library, our open source toolkit for the automatic analysis of music. The output from Essentia is collected by the AcousticBrainz project and made available to the public under a CC0 license (public domain). In the 6 weeks since its inception, AcousticBrainz contributors have already submitted data for 650,000 audio tracks using the pre-release software. Today, AcousticBrainz also releases a music analysis tool for Mac, Linux and Windows to the general public, which will allow many more people to contribute data from their personal music collections.
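As a sketch of how a developer might consume the published data, here is a minimal Python example. The `/api/v1/<mbid>/low-level` endpoint path and the exact JSON field names (`tonal.key_key`, `rhythm.bpm`, which follow Essentia's output format) are assumptions here; check the AcousticBrainz documentation for the current API.

```python
# Sketch: reading a few descriptors from an AcousticBrainz analysis document.
# The endpoint path below is an assumption based on the public API at launch.
import json
import urllib.request

API_ROOT = "https://acousticbrainz.org/api/v1"  # assumed API root

def fetch_low_level(mbid):
    """Download the low-level analysis JSON for a MusicBrainz recording ID."""
    with urllib.request.urlopen(f"{API_ROOT}/{mbid}/low-level") as resp:
        return json.load(resp)

def summarize(doc):
    """Pull a couple of common descriptors out of an analysis document."""
    tonal = doc.get("tonal", {})
    rhythm = doc.get("rhythm", {})
    key = f'{tonal.get("key_key", "?")} {tonal.get("key_scale", "")}'.strip()
    return {"key": key, "bpm": rhythm.get("bpm")}

# Example with a canned document, so the sketch runs offline:
sample = {"tonal": {"key_key": "A", "key_scale": "minor"},
          "rhythm": {"bpm": 120.0}}
print(summarize(sample))  # {'key': 'A minor', 'bpm': 120.0}
```

In a real client, `summarize(fetch_low_level(mbid))` would apply the same extraction to a live response for a given MusicBrainz recording ID.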

This new project is being presented today by Robert Kaye, founder of MusicBrainz and Executive Director of the MetaBrainz Foundation, at the Music 4.5 Open Data event, which devotes a session to discussing the value of music content and music-related data and how open data could be a catalyst for monetising music content. AcousticBrainz will also be presented at the Strata Conference tomorrow in a session titled 'Disrupting the Music Tech Space with Open Data'.

19 Nov 2014 - 09:03
7th International Workshop on Machine Learning and Music
28 Nov 2014

With the current explosion of music in digital formats and the computational power of modern systems, research on machine learning and music is gaining increasing popularity. As the complexity of the problems investigated by researchers in machine learning and music increases, there is a need to develop new algorithms and methods to solve them. The focus of the 7th International Workshop on Machine Learning and Music (MML14) is on novel methods that take into account or benefit from musical structure.

This workshop will take place at Universitat Pompeu Fabra and it is organised by Rafael Ramírez (Universitat Pompeu Fabra), Darrell Conklin (University of the Basque Country) and José Manuel Iñesta (University of Alicante).

17 Nov 2014 - 17:42
Tutorial on Beijing Opera and computational tools for its analysis
18 Nov 2014 - 20 Nov 2014

This is a 3-hour tutorial that we gave at ISMIR in Taipei and are now giving again here:

Jingju music: concepts and computational tools for its analysis
Xavier Serra, Rafael Caro Repetto, Sankalp Gulati, Ajay Srinivasamurthy
Tuesday, Nov 18, & Thursday, Nov 20, 10:00am-12:00pm, Room 55.230

Abstract: Jingju (also known as Peking or Beijing opera) is one of the most representative genres of Chinese traditional music. From an MIR perspective, jingju music offers interesting research topics that challenge current MIR tools. The singing/acting characters in jingju are classified into predefined role-type categories with characteristic singing styles. Their singing is accompanied by a small instrumental ensemble, within which a high-pitched fiddle, the jinghu, is the most prominent instrument in the characteristic heterophonic texture. The melodic conventions that form jingju modal systems, known as shengqiang, and the percussion patterns that signal important structural points in the performance offer interesting research questions. Also, the overall rhythmic organization into pre-defined metrical patterns known as banshi makes tempo tracking and rhythmic analysis a challenging problem. Chinese being a tonal language, the intelligibility of the text requires the expression of tonal categories in the melody, which offers an appealing scenario for research on the lyrics-melody relationship. The role of the performer as a core agent of the music creativity gives jingju music a notable space for improvisation. The lyrics and scores cannot be taken as authoritative sources, but only as transcriptions of particular performances.

In this tutorial we will give an overview of Jingju music, of the relevant problems that can be studied from an MIR perspective, and of the use of specific computational tools for its analysis. The tutorial is organized in three parts: the first is an introduction to Jingju from a musicological perspective, the second covers diverse audio analysis tools of relevance to the study of Jingju, and in the last part we present and discuss specific examples of analyzing Jingju arias using those tools.

Tuesday, Nov 18, 10:00am-12:00pm, Room 55.230
1. Presentation (Xavier Serra)
2. Introduction to jingju music (Rafael Caro)
3. Computational framework (Xavier Serra)
4. Research problems (Xavier Serra, Rafael Caro)

Thursday, Nov 20, 10:00am-12:00pm, Room 55.230
5. Computational tools for melodic description of jingju music (Sankalp Gulati)
6. Computational tools for rhythm analysis of jingju music (Ajay Srinivasamurthy)
7. Conclusions (Xavier Serra)

13 Nov 2014 - 09:51
Participation in the "Atles de la Innovació a Catalunya"

Jordi Bonada participates in the public presentation of the "Atles de la Innovació a Catalunya" at Fàbrica Moritz on November 13th 2014. This is an event organized by the Plataforma Coneixement, Territori i Innovació that showcases several collaborations between universities and industry. In particular, he will introduce the collaboration between UPF and Yamaha Corp in the context of singing voice synthesis.

13 Nov 2014 - 09:01