News and Events

Participation in FMA 2017

Rong Gong and Hasan Sercan Atlı will participate in the 7th International Workshop on Folk Music Analysis, which takes place in Málaga (Spain) from June 14th to 16th, 2017. They will be presenting the following papers:


9 Jun 2017 - 09:45
PhD position on data-driven methodologies for music knowledge extraction
In the context of a collaborative project between the Music Technology and Natural Language Processing groups of the Department of Information and Communication Technologies (DTIC) at Universitat Pompeu Fabra (UPF), we offer a PhD position dedicated to developing data-driven methodologies for music knowledge extraction by combining Natural Language Processing and Music Information Retrieval approaches.
Supervisors of the position: Xavier Serra and Horacio Saggion
Contact for application:  Aurelio Ruiz (aurelio [dot] ruiz [at] upf [dot] edu (subject: PhD%20position%20on%20music%20knowledge%20extraction) )
The work to be done in this PhD will aim at processing music-related text from open web sources in order to generate musically relevant knowledge. This will require combining methodologies from Music Information Retrieval (MIR), Natural Language Processing (NLP) and Computational Musicology.
The PhD position is part of the María de Maeztu Strategic Research Program on data-driven knowledge extraction (MDM-2015-0502) and linked to the program of the Spanish Ministry of Economy and Competitiveness “Ayudas para contratos predoctorales para la formación de doctores 2017”.
Deadline for application: Open application until a suitable candidate is found. The selected candidate will need to go through the admission process of the ICT PhD program at UPF.
DTIC-UPF María de Maeztu Unit of Excellence:
The Severo Ochoa and María de Maeztu program, organised by the Ministry of Economy and Competitiveness, aims at “recognizing existing centres and units in Spain that perform cutting-edge research and are among the world's best in their respective areas”. DTIC-UPF is the only university department in the ICT field awarded this distinction.
The Strategic Research Program associated to the distinction is focused on data-driven knowledge extraction, boosting synergistic research initiatives across our different research areas: (1) cognitive and intelligent systems, (2) audiovisual technologies, (3) networks and communications, and (4) computational biomedicine.
Eligibility criteria:
Applicants should meet the requirements of the 2017 “contratos predoctorales” call (MINECO), whose details are pending publication. At the time of submission, the candidate:
  • Should not hold a PhD.
  • Should not have been granted any other PhD scholarship with a duration of 12 months or more.
  • Should not have received funding from the Spanish Programme for Research to pursue his/her PhD studies.
  • Should hold a degree that formally entitles him/her to embark on a doctorate in the academic year 2017-2018 (MSc degree).
  • Should be in a position to be legally accepted into the ICT PhD program at UPF.
Conditions and application:
Contract terms are detailed in the 2017 call for “contratos predoctorales” (MINECO). As the call has not yet been published, the details outlined below are drawn from the 2016 call (amounts may change slightly):
  • 4-year fellowship with an annual evaluation.
  • Includes a stipend of up to 4,750€ per fellow to undertake a research visit to another research center during the fellowship.
  • Grant of up to 1,500€ to cover the PhD enrolment fees.
Application procedure:
Send the application by email to Aurelio Ruiz (aurelio [dot] ruiz [at] upf [dot] edu (subject: PhD%20position) ), including:
  • Cover letter with a proposal for a possible thesis (max. 2 pages).
  • CV.
  • Names, email addresses and telephone numbers of two or three referees.
This is an internal pre-selection process to identify the best candidate. The pre-selected candidate, with the support of DTIC-UPF, will then apply to the UPF PhD program and to the MINECO grant.
5 Jun 2017 - 16:57
Jordi Pons receives an AI Grant

Jordi Pons, PhD student at the MTG, has been awarded one of the AI Grants given by Nat Friedman. As part of the award, Jordi will get $5,000 to support his work on creating a dataset of sounds from Freesound and using it in his research.

The AI Grants are an initiative of Nat Friedman, co-founder and CEO of Xamarin, to support open-source AI projects. The initiative supports any project in AI, large or small, as long as the software and data are released under an open license.

The project proposed by Jordi is part of an initiative of the MTG to promote the use of Freesound for research. The goal is to create a large dataset of sounds, following the same principles as ImageNet, that can be used in machine learning research projects. The project will contribute to developing an infrastructure with which to organize a crowdsourcing effort to convert Freesound into a research dataset.
For the grant application, Jordi presented the following video:
30 May 2017 - 10:04
PhD defense by Álvaro Sarasúa and seminars by jury members Frederic Bevilacqua and Maarten Grachten

Date: Monday May 29th

Location: Universitat Pompeu Fabra, Tanger building, room 55.309.


11:00 PhD defense of Álvaro Sarasúa Berodia, Musical Interaction Based on the Conductor Metaphor.  

         Supervised by Emilia Gómez and Enric Guaus in the context of the PHENICX project, in a joint collaboration between the Music Technology Group and the Sonology Department, Escola Superior de Música de Catalunya.

         Jury members: Frederic Bevilacqua (IRCAM), Sergi Jordà (Universitat Pompeu Fabra), Maarten Grachten (Johannes Kepler University)

15:30 Invited seminars 

Frederic Bevilacqua, Movement Sound Interaction: from creative applications to rehabilitation

I will present an overview of the research we have been conducting at IRCAM on gesture capture and analysis. We have been collaborating with various composers and performers, which has allowed us to develop important concepts and paradigms for the design of musical interactive systems. For example, we have built augmented instruments by adding motion-capture systems to acoustic instruments such as the violin; this allows us to study instrumental gestures and to develop software for following and recognising gestures. We have also developed specific tangible interfaces such as the MO - Modular Musical Objects or, more recently, the RIoT, which allow for interacting with digital sound environments. Finally, I will present some recent studies and applications related to sensorimotor learning and embodied music cognition.

Bio: Frédéric Bevilacqua is the head of the Sound Music Movement Interaction team at IRCAM in Paris. His research concerns the modeling and design of interaction between movement and sound, and the development of gesture-based interactive systems.
Maarten Grachten, Basis models of musical expression for creating and explaining music performances

Expression in music performance is an important aspect of score-based music traditions such as Western classical music: music performed by skilled musicians can be as captivating as a poor performance can be off-putting. Computational modeling of expression in music performance is a challenging and ongoing effort, aiming both at a better understanding of the underlying principles and at novel applications in music technology. In this talk, we will present a recently proposed modeling framework for musical expression that uses basis-function representations of score information. We show how it can be used for predictive modeling, that is, to generate an expressive performance of a musical score, as well as for explanatory purposes. We illustrate this framework both in the context of solo piano music and in classical symphonic music.
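The basis-function idea can be sketched in a few lines: each note is described by a vector of score features (basis functions), and an expressive parameter is fitted as a linear combination of them. The features, data and model size below are invented for illustration; the actual framework presented in the talk is considerably richer.

```python
import numpy as np

# Toy sketch of a basis-function model of expression. Each note carries
# a vector of basis functions (here: normalized pitch, a downbeat
# indicator, and a constant); loudness is modeled as a linear
# combination of them. All numbers are invented for illustration.
basis = np.array([
    [0.2, 1.0, 1.0],
    [0.4, 0.0, 1.0],
    [0.6, 0.0, 1.0],
    [0.8, 1.0, 1.0],
    [0.5, 0.0, 1.0],
])

# Loudness values "measured" from a fictional performance of those notes
loudness = np.array([0.9, 0.5, 0.6, 1.0, 0.55])

# Fit the basis-model weights by least squares
weights, *_ = np.linalg.lstsq(basis, loudness, rcond=None)

# Predictive use: generate a loudness value for an unseen note
new_note = np.array([0.3, 1.0, 1.0])
predicted = new_note @ weights

# Explanatory use: the weights indicate how much each score feature
# (pitch height, metrical position) contributes to the modeled loudness
print(weights, predicted)
```

Fitting such a model on real performance data supports both uses mentioned in the abstract: predicting expressive parameters for new scores, and inspecting the learned weights to explain which score features drive the expression.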

Bio: Maarten Grachten holds a Ph.D. degree in computer science and digital communication (2006, Pompeu Fabra University, Spain). He is a former member of the Artificial Intelligence Research Institute (IIIA, Spain), the Music Technology Group (MTG, Spain), the Institute for Psychoacoustics and Electronic Music (Belgium), and the Austrian Research Institute for Artificial Intelligence (OFAI, Austria). Currently, he is a senior researcher at the Department of Computational Perception (Johannes Kepler University, Austria). Grachten has published in and reviewed for numerous international journals and conferences, on topics related to machine learning, music information retrieval, affective computing, music cognition, and computational musicology. His current research focuses on computational modeling of musical expectation and expressive performance.

26 May 2017 - 10:38
Participation of the MTG at Primavera Pro

The MTG participates next week in Primavera Pro, the professional section of the Primavera Sound Festival in Barcelona, with two talks. Sergio Oramas and Frederic Font will present topics related to their research to the festival audience.

Sergio Oramas
Millions of people are using streaming services, and this Big Data is an ideal fuel for Artificial Intelligence systems, including music recommendation systems. These systems work very well for relatively popular artists, but what happens to a band that is new or has very few followers? During this presentation we will show the opportunities that Artificial Intelligence offers and reveal the initiatives that the industry is taking in this domain.
Frederic Font
Creative Commons offers a framework with which independent music artists, sound designers and other sound creators can release their music and sound effects under clear terms. In this talk we will explain why such audio content is not yet extensively used in the professional sector, and discuss possible solutions based on the work done in the AudioCommons project.


25 May 2017 - 14:15
Participation of TELMI project in the Festa de la Ciencia

On May 27th at 7:20PM, the TELMI project will be presented at the Festa de la Ciencia, organised by the Barcelona city council at the Parc de la Ciutadella.

The presentation will be part of the Microxerrades (micro-talks) section, under the title Tecnologia per aprendre música (“Technology for learning music”).

This is the 11th edition of the festival, which aims at disseminating scientific knowledge and technological innovation. The festival is free and open to the general public.

Photo: BarcelonaCiencia

23 May 2017 - 18:14
Participation of AudioCommons project in a panel at Sonar+D

The AudioCommons project will take part in the panel Creative Commons for the Creative Industries on June 15th at 3PM at Sonar+D.

In this panel we will discuss different perspectives and specific examples of how Creative Commons content can be used by the creative industries, how it can generate economic returns for content creators, and how specific legal aspects can be addressed. The panel will be organized around questions previously submitted by the audience, which can be posted on Twitter with the hashtag #ACSonarPlusD. All the information can be found on the AudioCommons website.

The panel will be composed of members of the Audio Commons consortium and external professionals:

Malcolm Bain is a founding partner of id law partners and specializes in the legal issues of open source software and content, including developing and freeing software, establishing licensing strategies, and IPR enforcement. Malcolm is a member of the Free Software Foundation Europe and Free Software Chair at the Universitat Politècnica de Catalunya.

Emmanuel Donati is CEO of Jamendo, one of the biggest platforms for free independent music. He is in charge of a catalog of 600,000 tracks shared by 40,000 artists from all over the world, and works on various aspects of the strategy to make independent music more accessible and to bring an alternative business model to musicians.

Roger Subirana is a composer and music producer who, apart from his personal compositions, creates music for cinema, TV, theatre, several audiovisual projects and advertisements. He releases his work under Creative Commons, which has facilitated his international recognition and the possibility to license his work for important commercial brands and movies. He is one of the most successful artists on the Jamendo platform, with more than 900,000 downloads and 6.5 million listens.

Frederic Font is a post-doc researcher at the Music Technology Group of the Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona. His current research focuses on facilitating the reuse of audio content in music creation and audio production contexts. In addition to his research, Frederic leads the development of Freesound and coordinates the EU-funded Audio Commons Initiative.

22 May 2017 - 15:01
New project to exploit music education technologies
The European Research Council has awarded Xavier Serra a Proof of Concept grant to complement the existing ERC Advanced Grant on the CompMusic project. This new project will be dedicated to promoting the exploitation of a number of technologies that can support online music education.
The TECSOME project will develop an exploitable system to facilitate the assessment of exercises submitted by music students taking online courses. The system will integrate technologies developed within the CompMusic project that measure the similarity between musical audio recordings, and the project will also define a market strategy to exploit the system. The goal is to develop an approach with which to scale up music performance courses to MOOC level.
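To give a flavour of what measuring similarity between recordings can involve (the actual CompMusic technologies are not described in this post; this is a hypothetical sketch with invented feature values), a classic building block is dynamic time warping (DTW), which compares two feature sequences while absorbing differences in tempo:

```python
import numpy as np

# Toy sketch: compare a student's performance to a reference using DTW
# over per-frame features. The "pitch contours" below are invented.

def dtw_distance(a, b):
    """DTW alignment cost between two 1-D feature sequences."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],
                                 cost[i, j - 1],
                                 cost[i - 1, j - 1])
    return cost[n, m]

reference = np.array([1.0, 2.0, 3.0, 2.0, 1.0])       # teacher's contour
student_a = np.array([1.0, 2.0, 2.0, 3.0, 2.0, 1.0])  # slower but accurate
student_b = np.array([3.0, 1.0, 2.0, 3.0, 3.0])       # off in places

print(dtw_distance(reference, student_a))  # 0.0: tempo difference absorbed
print(dtw_distance(reference, student_b))  # > 0: genuine deviations remain
```

A lower DTW cost indicates a performance closer to the reference, which is the kind of signal an automatic assessment system could build on.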
Proof of Concept grants, worth €150,000 each and open to ERC grant holders, can be used to establish intellectual property rights, investigate business opportunities or conduct technical validation. Xavier Serra already received a Proof of Concept grant in 2015, CAMUT, in that case to exploit other CompMusic results for the particular case of Indian music.
The CompMusic project, funded with an ERC Advanced Grant in 2010, will finish in June 2017. In this project, a group of researchers led by Xavier Serra has worked on the automatic description of music with an emphasis on cultural specificity, carrying out research in the field of music information processing with a domain-knowledge approach. They have developed information modelling techniques relevant to several non-Western music cultures, contributing to the overall field of Music Information Retrieval and to music exploration and education. TECSOME is a natural step in the technology transfer goals of the CompMusic project.
19 May 2017 - 11:30
Web application developer position at the MTG

The MTG is looking for a web application developer to work within the EU-funded project RAPID-MIX.

Job description:
The selected candidate will be working on an online repository for multimodal data, assisting in its development as well as preparing application prototypes and demos in conjunction with the RAPID-MIX API.

• Back-end web development (Python, Flask, Docker, PostgreSQL)
• Some front-end development (JavaScript, D3)
• Fluent in English (written and spoken)
• C++ experience, as well as experience working with sound and music technology, is a plus

Starting date: immediate
Dedication: Full time (3 months) / part time (6 months)

How to apply:
Interested candidates should send a resume as well as a brief motivation letter addressed to Panos Papiotis (panos [dot] papiotis [at] upf [dot] edu).

16 May 2017 - 14:01
Three MIR talks by researchers from McGill
22 May 2017
Gabriel Vigliensoni, Martha Thomae, and Jorge Calvo-Zaragoza, from McGill University, Canada, will present their research on Monday, May 22nd, at 3:30pm in room 55.309.
Gabriel Vigliensoni
Title: A case study with the Music Listening Histories Dataset: Do demographic, profiling, and listening context features improve the performance of automatic music recommendation systems?
Abstract: Digital music services provide us with real-time access to millions of songs, and automatic music recommendation systems offer new ways to discover music. These systems, however, do not account for the context of music listening, even though the function of music in everyday life depends on that context. Incorporating information about people's music listening habits can therefore improve recommendations. In this talk, I present my research on collecting music listening histories spanning half a million users, and I explain how insights generated from the data can improve the prediction accuracy of a music recommendation model.
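The question the talk addresses can be miniaturized into a toy experiment (entirely synthetic, and using a simple linear model rather than the actual recommendation system studied): generate play counts that partly depend on a listening-context feature, then compare the fit of a model with and without that feature.

```python
import numpy as np

# Synthetic illustration: does a listening-context feature improve a
# (toy, linear) recommendation model? All data below is made up.
rng = np.random.default_rng(0)
n = 200

user_pref = rng.normal(size=n)    # per-interaction user preference signal
item_pop = rng.normal(size=n)     # item popularity signal
context = rng.integers(0, 2, n)   # e.g. 1 = listening while commuting

# Synthetic "play count": partly explained by the context feature
plays = (0.8 * user_pref + 0.5 * item_pop + 0.6 * context
         + 0.1 * rng.normal(size=n))

def fit_mse(features, target):
    """Least-squares fit; returns the mean squared training error."""
    w, *_ = np.linalg.lstsq(features, target, rcond=None)
    return np.mean((features @ w - target) ** 2)

bias = np.ones(n)
base = np.column_stack([user_pref, item_pop, bias])
with_ctx = np.column_stack([user_pref, item_pop, context, bias])

mse_base = fit_mse(base, plays)
mse_ctx = fit_mse(with_ctx, plays)

print(mse_base, mse_ctx)  # the context-aware model fits better here
```

In this toy setup the improvement is built into the data; the empirical question of the talk is whether real listening histories show the same effect.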
Martha Thomae
Title: A Methodology for Encoding Mensural Music: Introducing the Mensural MEI Translator
Abstract: Polyphonic music from the Late Middle Ages (thirteenth century) and the Renaissance (fourteenth and fifteenth centuries) was written in mensural notation, a system of notation characterized by note durations that are context-dependent. Efforts have been made to encode this music in a machine-readable format, with the goal of preserving the repertoire in its original notation while still allowing for computational musical analysis. Only a few formats provide support for encoding this old system of notation; one of them is MEI (Music Encoding Initiative). Due to the inefficiency of hand-coding music in general, and the added complication in mensural notation of interpreting the value of the notes while coding, we propose a methodology to facilitate the task of encoding the music into a Mensural MEI file through a tool we developed called the Mensural MEI Translator. The methodology allows the musicologist to enter the piece in a score editor instead of encoding it directly into a Mensural MEI file. Through a series of processes, this file is converted into a Mensural MEI file that encodes the piece in its original (mensural) notation.
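A small sketch of why mensural durations are not fixed: the breve spans two or three semibreves depending on the tempus, and the semibreve spans two or three minims depending on the prolatio. The snippet below illustrates only this mensuration-dependence and deliberately ignores the further rules (imperfection, alteration, coloration) that make real transcription genuinely contextual; it is not part of the Mensural MEI Translator.

```python
# Simplified illustration of mensuration-dependent note values.
# tempus_perfect: breve = 3 semibreves (else 2)
# prolatio_major: semibreve = 3 minims (else 2)

def duration_in_minims(note, tempus_perfect, prolatio_major):
    semibreve = 3 if prolatio_major else 2          # minims per semibreve
    breve = semibreve * (3 if tempus_perfect else 2)  # minims per breve
    values = {"minim": 1, "semibreve": semibreve, "breve": breve}
    return values[note]

# The same written breve lasts 4, 6 or 9 minims under different mensurations
print(duration_in_minims("breve", False, False))  # 4
print(duration_in_minims("breve", True, False))   # 6
print(duration_in_minims("breve", True, True))    # 9
```

This is why a score editor that lets the musicologist enter interpreted durations, with automatic conversion back to the original notation, saves considerable effort over hand-encoding.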
Jorge Calvo-Zaragoza
Title: Document Analysis for Music Scores with Deep Learning
Abstract: Content within musical documents is not restricted to notes but involves heterogeneous information such as symbols, text, staff lines, ornaments or annotations. Before any attempt at automatically recognizing the information on the scores with an Optical Music Recognition system, it is necessary to detect and classify each constituent layer of information into different categories. The greatest obstacle of this classification process is the high heterogeneity among music collections, which makes it difficult to propose methods that can be generalizable to a broad range of sources. This presentation discusses a data-driven document analysis framework based on the use of Deep Learning methods, namely Convolutional Neural Networks. It focuses on extracting the different layers within musical documents by categorizing the image at pixel level. 
The main advantage of the approach is that it can be used regardless of the type of document provided, as long as training data is available. We illustrate some of the capabilities of the framework by showing examples of common tasks that are frequently performed on images of musical documents. We believe that this framework will allow the development of generalizable and scalable automatic music recognition systems, thus facilitating the creation of large-scale browsable and searchable repositories of music documents.
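The notion of categorizing a document image at pixel level can be sketched in miniature (this toy uses a single hand-made filter in place of the trained Convolutional Neural Networks described in the talk, and an invented 8x8 "page"):

```python
import numpy as np

# Toy pixel-level layer classification. A hand-made 3x3 filter that
# responds to horizontal runs of ink stands in for a learned CNN
# feature; thresholding its response labels each pixel as belonging
# to the "staff line" layer or the background.
image = np.zeros((8, 8))
image[3, :] = 1.0   # a horizontal "staff line"
image[1, 4] = 1.0   # an isolated "note-like" blob

kernel = np.array([[0., 0., 0.],
                   [1., 1., 1.],
                   [0., 0., 0.]])

# Valid 2D cross-correlation implemented directly with numpy
h, w = image.shape
kh, kw = kernel.shape
response = np.zeros((h - kh + 1, w - kw + 1))
for i in range(response.shape[0]):
    for j in range(response.shape[1]):
        response[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)

# Per-pixel labels: 0 = background, 1 = staff line
labels = (response >= 2.0).astype(int)
print(labels)
```

In the framework discussed in the talk, the filters are learned from annotated training data rather than hand-crafted, which is what makes the approach applicable across heterogeneous music collections.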
16 May 2017 - 08:32