News and Events

PhD fellowship on Music Information Retrieval at MTG

The Music Technology Group (MTG) of the Department of Information and Communication Technologies, Universitat Pompeu Fabra in Barcelona is opening a PhD fellowship in the area of Music Information Retrieval to start in the Fall of 2016.

Application closing date: 05/05/2016

Start date: 01/10/2016

Research lab:  Music Information Research lab, Music Technology Group, Department of Information and Communication Technologies, Universitat Pompeu Fabra
Supervisor: Emilia Gómez

Duration: 3+1 years

Topics: automatic transcription, sound source separation, music classification, singing voice processing, melody extraction, music synchronization, classical music, computational ethnomusicology.

Requirements: Candidates must have a good Master's degree in Computer Science, Electronic Engineering, Physics or Mathematics. Candidates should be confident in at least one of the following areas: signal processing, information retrieval, machine learning. They must also have excellent programming skills, be fluent in English and possess good communication skills. Musical knowledge would be an advantage, as would previous experience in research and a track record of publications.


Application: Interested candidates should send a motivation letter, a CV (preferably with references), and academic transcripts to Prof. Emilia Gómez (emilia [dot] gomez [at] upf [dot] edu) before May 5th 2016. Please include in the subject [PhD MIR].



15 Apr 2016 - 14:50 | view
PhD Studentship in Technology Enhanced Learning of Music Instruments
Application closing date: 22/04/2016
Start date: 01/09/2016
Research group: Music Technology Group, Department of Information and Communication Technologies, Universitat Pompeu Fabra
Duration: 3 years (funding available)

Applications are invited for two fully funded PhD studentships at the Music Technology Group, Universitat Pompeu Fabra, Barcelona, Spain, to undertake research into Technology Enhanced Learning of Music Instruments.

TELMI is a joint project of 3 academic and 2 industry partners: Universitat Pompeu Fabra, Spain; Royal College of Music, UK; University of Genova, Italy; HIGHSKILLZ, UK; and SAICO INTELLIGENCE, S.L., Spain. The aim of the project is to study how we learn musical instruments, taking the violin as a case study, from a pedagogical and scientific perspective, and to create new interactive, assistive, self-learning, augmented-feedback, and social-aware systems complementary to traditional teaching. As a result of a tightly coupled interaction between technical and pedagogical partners, the project will attempt to answer questions such as “What will musical instrument learning environments look like in 5-10 years' time?” and “What impact will these new musical environments have on instrument learning as a whole?”. More information about the project can be found at

The student will be a member of the Music Technology Group, Department of Information and Communication Technologies, Universitat Pompeu Fabra and will be supervised by Dr. Rafael Ramirez and Dr. Alfonso Perez-Carrillo. The successful candidate will pursue research at the intersection of audio and video signal processing, machine learning and cognitive sciences in the context of music performance pedagogy. The work will involve the development of multimodal signal processing algorithms, design of augmented visual feedback systems and the development of non-intrusive low-cost sensing systems for violin learning/teaching.

Candidates must have a good Master's degree in Computer Science, Electronic Engineering, Physics or Mathematics. Candidates must be confident in signal processing, have excellent programming skills, be fluent in English and possess good communication skills. Experience in machine learning and music performance would be an advantage, as would previous experience in research and a track record of publications. Interested candidates should apply by sending a full CV and a letter of interest to Dr. Alfonso Perez and Dr. Rafael Ramirez. Informal enquiries can be made by email to Dr. Alfonso Perez-Carrillo (alfonso [dot] perez [at] upf [dot] edu) and Dr. Rafael Ramirez (rafael [dot] ramirez [at] upf [dot] edu).
5 Apr 2016 - 18:26 | view
CANTE: Open Algorithm, Code & Data for the Automatic Transcription of Flamenco Singing

The MTG has published CANTE: an Open Algorithm, Code & Data for the Automatic Transcription of Flamenco Singing.

The proposed system outperforms state-of-the-art singing transcription systems with respect to voicing accuracy, onset detection, and overall performance when evaluated on flamenco singing datasets. We hope it will be a contribution not only to flamenco research but also to other singing styles.

You can read about our algorithm in the paper we published in IEEE TASLP, where we present the method, strategies for evaluation, and a comparison with state-of-the-art approaches. You can not only read about it but actually try it, as we have published open-source software implementing the algorithm, plus a music dataset for its comparative evaluation, cante100 (flamenco corpora will be discussed in another post). All of this aims to foster research reproducibility and to motivate people to work on flamenco music.


5 Apr 2016 - 10:20 | view
Announcing the Sónar Innovation Challenge
The MTG will not organise the Barcelona Music Hack Day this year; instead, we are starting a new initiative in collaboration with the Sónar Festival: the Sónar Innovation Challenge (SIC).
The Sónar Innovation Challenge is along the same lines as the MHD, but a bit different. SIC is a platform for collaboration between tech companies and creators (programmers, designers, artists...) that aims to produce innovative prototypes to be showcased at Sónar+D.
At SIC, tech companies propose challenges for the creative community based on concrete needs for innovation. Creative coders, artists and designers who sign up for a challenge will have, after a selection process, the opportunity to work on a prototype within a unique collaborative environment, together with other challengers and company mentors.
4 Apr 2016 - 09:41 | view
Key Estimation in Electronic Dance Music, MTG presentation at ECIR 2016

This week the MTG is presenting some work at an oral session at the 38th European Conference on Information Retrieval (ECIR 2016), in Padua (IT), 20-23 March 2016.

Ángel Faraldo is presenting a paper titled "Key Estimation in Electronic Dance Music", written together with Emilia Gómez, Sergi Jordà and Perfecto Herrera. As such, it will be published in the conference proceedings by Springer-Verlag.

21 Mar 2016 - 13:24 | view
Concert and demos: Phenicx project, classical music for the XXI century

Concert and demos to show the technologies developed as part of the Phenicx project on Wednesday 30th March 2016 at 19h in Arts Santa Mònica (Claustre), La Rambla 7, Barcelona.

Classical music meets new technologies through the PHENICX project, which proposes new ways to enjoy the experience of live music through innovative systems that allow a concert to be viewed and listened to in a personalized way, depending on the interests of the viewer.

After three years of international research, we invite the audience of Barcelona to discover the technologies developed in the frame of the PHENICX project with a special event that will include different activities:

  • Live demonstration of technology and classical music performance: three live music demos at the piano of ± 10 mins each. The audience will be introduced to various ways in which music practice can be enriched and facilitated thanks to novel PHENICX technologies. More specifically, the demos will consider three themes (exact time schedule to be published close to the event):
    • Discovering new aspects about music: in the 19th century, virtuoso pianists traveled around Europe playing transcriptions of important works, so that these works could be discovered by a broader audience. Using a piano transcription of Beethoven's 'The Creatures of Prometheus', we will let you discover the various musical layers and dimensions of this orchestral piece.
    • Music structure: ever wondered how musical themes connect to form a larger structure? We will unravel this and show you live how a piece develops the way it does.
    • Anytime performance tracking: fed up with carrying around heavy books of sheet music and having to turn pages at inconvenient moments? We will show you how state-of-the-art performance tracking technology can enable live score following, wherever you are in a piece, literally putting a full music library at your fingertips.
  • A space for demos: discover the various components of our integrated prototype, offering an enriched experience of a concert. For example, listen to the different instruments of the orchestra, watch a live scrolling score, and browse symphonic work by considering its structural components.
  • Becoming the maestro: an interactive installation to simulate the role of a conductor and to understand the control of the different instruments and their interpretations through gestures.

In collaboration with

  • Multimedia Computing Group (MMC) - Technische Universiteit Delft
  • Department of Computational Perception (CP) - Johannes Kepler Universität Linz
  • Royal Concertgebouw Orchestra
  • Video Dock BV
  • Austrian Research Institute for Artificial Intelligence
  • Escola Superior de Música de Catalunya (ESMUC)

With the support of

European Commission, FP7 (Seventh Framework Programme)




14 Mar 2016 - 17:21 | view
Junior software developer position at the MTG-UPF

This position will involve working together with researchers at the MTG-UPF in Barcelona to develop and maintain web-based applications related to sound and music. It relates to a number of MTG projects that include large repositories of sounds with user communities around them.

Starting date: immediate
Duration: 12 months with option to renew

Required skills/qualifications:

  • Bachelor degree in Computer Science or similar educational qualification
  • Proficiency in both written and spoken English
  • Proficiency in Python and Javascript
  • Experience with at least one python based web framework (such as Django or Flask)
  • Experience in working with databases and large datasets
  • Demonstrated ability to write maintainable, well-documented software

 Preferred skills/experience:

  • Working experience with source control systems, unit testing
  • Experience in system administration tasks
  • Experience in C/C++
  • Familiarity with concepts of audio signal processing and machine learning
  • Passion for music and audio
  • Participation in open source software projects


The Music Technology Group of the Universitat Pompeu Fabra is a leading research group with more than 40 researchers, carrying out research on topics such as audio signal processing, sound and music description, musical interfaces, sound and music communities, and performance modeling. The MTG wants to contribute to the improvement of the information and communication technologies related to sound and music, carrying out competitive research at the international level while transferring its results to society. To that end, the MTG aims to find a balance between basic and applied research while promoting interdisciplinary approaches that incorporate knowledge from both scientific/technological and humanistic/artistic disciplines. For more information on MTG-UPF please visit


Interested people should send a resume as well as an introduction letter to mtg-info [at] upf [dot] edu (subject: "Junior software developer position")

23 Feb 2016 - 14:45 | view
Three presentations of AudioCommons in London
The first article of the AudioCommons project will be presented on Thursday February 11th at the AES 61st Conference on Audio for Games in London. The paper is entitled “Audio Commons: bringing Creative Commons audio content to the creative industries” and describes the main ideas and core concepts of the Audio Commons initiative and the Audio Commons Ecosystem.
In the same week there will be a public presentation of the AudioCommons initiative on Monday February 8th at the University of Surrey (link) and another presentation on Tuesday February 9th at Queen Mary University of London (link).
All the presentations will be given by Xavier Serra and Frederic Font.
5 Feb 2016 - 19:32 | view
Open presentation of TELMI project
8 Feb 2016

Monday February 8th 2016 from 3pm to 6pm in room 52.019 of the Communication Campus of the UPF (Roc Boronat 138, Barcelona)

Technology Enhanced Learning of Musical Instrument Performance (TELMI) is a project funded by the European Commission that aims to study how we learn musical instruments, taking the violin as a case study, from a pedagogical and scientific perspective and to create new interactive, assistive, self-learning, augmented-feedback, and social-aware systems complementary to traditional teaching.

As part of the Kick-off meeting of the project, we offer an open session where each of the partners of the project will introduce what they are doing and will do in relation to TELMI.

15:00h-15:30h: Universitat Pompeu Fabra
The Music Technology Group (MTG), UPF, is an international reference in audio processing technologies and their musical applications. The MTG is the coordinator of the TELMI project and will develop systems for the acquisition of multi-modal (audio, motion, and biological signals) data from musical performances, as well as associated algorithms for the analysis of these data.

15:30h-16:00h: University of Genova
Recognizing the importance of cross-fertilization between artistic and scientific knowledge, researchers at Casa Paganini-InfoMus (University of Genova) aim to combine scientific research in information and communications technology (ICT) with artistic and humanistic research. Scientific and technological research focuses on human-centred computing with emphasis on real-time analysis of nonverbal expressive and social behaviour. In TELMI, Casa Paganini – InfoMus will mainly work on data acquisition, on algorithms and software modules for automated measure of expressive gestures and social signals (e.g., entrainment and leadership), and on integration of the project platform and prototypes.

16:00h-16:30h: Royal College of Music
The Royal College of Music, London, trains talented musicians from all over the world for international careers as performers, conductors and composers. The RCM Centre for Performance Science conducts empirical research into how musicians learn and perform. As a partner of the TELMI project, the RCM team will set out fundamental skills involved in violin performance and determine how the new technologies will define success in each. We will also put these technologies into the hands of expert violinists and their students, evaluating how they change the way they learn, teach and perform.

16:30h-17:00h: Coffee break

17:00h-17:30h: HighSkillz
HighSkillz specializes in developing tailored game-based learning solutions that lead to faster competence development and more effective knowledge acquisition. The company is staffed by an experienced team and collaborates closely with subject matter experts to achieve the targeted learning outcomes, monitoring observable behaviours and cross-referencing them against the target learning outcomes. In the TELMI project, HighSkillz will leverage its expertise in personalised and adaptive environments to support the creation of an enhanced learning platform. The focus of the design will be on the use of game-based learning solutions and gamification to support students in mastering the complexity of playing a musical instrument, thus supporting the role of the teacher and empowering the student during periods of self-study.

17:30h-18:00h: SAICO
SAICO Intelligence will present the R&D2Value Methodology, which will be applied to the TELMI project for dissemination and exploitation tasks. The objective of this methodology is to transfer the technology developed in the project to the market through a number of actions taken during the R&D phase and the Market Launch phase. The presentation will show the benefits of, first, orienting R&D towards the market from the first minute of an R&D project and, second, executing the Market Launch well.

4 Feb 2016 - 11:15 | view
Seminar by Bob Sturm on the evaluation of music content analysis systems
4 Feb 2016

Bob Sturm, from Queen Mary - University of London, will give a talk on “The scientific evaluation of music content analysis systems: Toward valid empirical foundations for future real-world impact" on Thursday February 4th 2016 at 4:30pm in room 52.123 of the Communication Campus of the UPF.

Abstract: Music content analysis research aims to meet at least three goals: 1) connect users with music and information about music; 2) help users make music and information about music; and 3) help researchers develop content analysis technologies. Standard empirical practices used in this discipline, however, have serious problems (as noted in the MIReS 2013 Roadmap, and in [2-5]). I present three case studies that exemplify these problems and discuss them within a design-of-experiments framework. I argue that the problems with MIR evaluation cannot be satisfactorily addressed until the discipline adopts formal design of experiments [1]. I also propose new ways to think about what we do; this is very preliminary work.
[1] R. A. Bailey, Design of comparative experiments. Cambridge University Press, 2008.
[2] G. Peeters, J. Urbano, and G. J. F. Jones, “Notes from the ISMIR 2012 late-breaking session on evaluation in music information retrieval,” in Proc. ISMIR, 2012.
[3] B. L. Sturm, “Classification accuracy is not enough: On the evaluation of music genre recognition systems,” J. Intell. Info. Systems, vol. 41, no. 3, pp. 371–406, 2013.
[4] B. L. Sturm, “The state of the art ten years after a state of the art: Future research in music information retrieval,” J. New Music Research, vol. 43, no. 2, pp. 147–172, 2014.
[5] J. Urbano, M. Schedl, and X. Serra, “Evaluation in music information retrieval,” J. Intell. Info. Systems, vol. 41, pp. 345–369, Dec. 2013.
1 Feb 2016 - 22:17 | view