News and Events

Article in IEEE Signal Processing Magazine on Expression control in singing voice synthesis

Martí Umbert and Jordi Bonada co-author a journal article on the state of the art in expression control in singing voice synthesis, which has just been published in the November issue of the IEEE Signal Processing Magazine. It reviews the features typically used to control singing voice expression, the approaches that have been proposed, how these are evaluated, and the challenges that we currently foresee.

M. Umbert, J. Bonada, M. Goto, T. Nakano, and J. Sundberg, "Expression control in singing voice synthesis: Features, approaches, evaluation, and challenges," IEEE Signal Processing Magazine, vol. 32, no. 6, pp. 55-73, Nov. 2015.

Abstract: In the context of singing voice synthesis, expression control manipulates a set of voice features related to a particular emotion, style, or singer. Also known as performance modeling, it has been approached from different perspectives and for different purposes, and different projects have shown a wide extent of applicability. The aim of this article is to provide an overview of approaches to expression control in singing voice synthesis. We introduce some musical applications that use singing voice synthesis techniques to justify the need for an accurate control of expression. Then, expression is defined and related to speech and instrument performance modeling. Next, we present the commonly studied set of voice parameters that can change perceptual aspects of synthesized voices. After that, we provide an up-to-date classification, comparison, and description of a selection of approaches to expression control. Then, we describe how these approaches are currently evaluated and discuss the benefits of building a common evaluation framework and adopting perceptually-motivated objective measures. Finally, we discuss the challenges that we currently foresee.

22 Oct 2015 - 14:03 | view
Four PhD positions at the MTG-UPF
The MTG of the Universitat Pompeu Fabra in Barcelona offers four PhD positions to work within two new research projects, AudioCommons and TELMI, funded by the European Commission under the H2020 programme, with a starting date of February 2016. The candidates should have an academic and research background adequate for the work to be done within the project.
In the AudioCommons project, the MTG, in collaboration with a number of academic and industrial partners, will develop technologies and tools to facilitate the use of Creative Commons audio content by the creative industries, enabling creation, access, retrieval and reuse of audio material in innovative ways. In terms of research, the two PhD students joining this project will work on topics related to the automatic description of large sound collections. The research to be carried out should combine audio signal processing techniques for content analysis of the audio recordings with semantic web technologies for analysing the contextual information related to the recordings. The candidates should be competent in fields such as audio signal processing, machine learning, and semantic technologies.
In the TELMI project, the MTG, in collaboration with a number of academic and industrial partners, will design and implement new multi-modal interaction paradigms for music learning and will develop assistive, self-learning, augmented-feedback, and social-aware prototypes complementary to traditional teaching. The two PhD students joining this project will work on topics related to machine learning, DSP, gesture capture and analysis, and computer interfaces for music learning. Candidates are expected to have strong experience in machine learning, DSP, and computer programming.
The exact research to be carried out by the PhD students will be decided considering the background and interests of the candidates. Apart from a CV and a motivation letter, interested people should send a research proposal related to one of the two projects.
In parallel to the acceptance by the MTG, the candidates will have to apply and be accepted to the PhD program of the Department of Information and Communication Technologies.
Send your application (CV, research proposal and motivation letter) to mtg [at] upf [dot] edu (subject: PhD positions).
19 Oct 2015 - 15:09 | view
TELMI: A new project to create interactive, assistive, self-learning and social-aware technologies for music learning
The aim of the TELMI project is to study, from a pedagogical and scientific perspective, how we learn musical instruments, taking the violin as a case study, and to create new interactive, assistive, self-learning, augmented-feedback, and social-aware systems complementary to traditional teaching. As a result of a tightly coupled interaction between technical and pedagogical partners, the project will attempt to answer questions such as “What will musical instrument learning environments be like in 5-10 years’ time?” and “What impact will these new musical environments have on instrument learning as a whole?” The general objectives of the TELMI project are:
  • to design and implement new interaction paradigms for music learning and training based on state-of-the-art multi-modal (audio, image, video and motion) technologies;
  • to evaluate, from a pedagogical point of view, the effectiveness of such new paradigms;
  • based on the evaluation results, to develop new multi-modal interactive music learning prototypes for student-teacher, student-only, and collaborative learning scenarios; and
  • to create a publicly available reference database of multimodal recordings for online learning and social interaction among students.
The results of the project will serve as a basis for the development of next-generation music learning systems, improving on current student-teacher interaction and student-only practice, and providing the potential to make music education and its benefits accessible to a substantially wider public.
19 Oct 2015 - 13:10 | view
AudioCommons: a new project to develop technologies for the reuse of open audio content
The Audio Commons initiative aims to promote the use of open audio content and to develop technologies that support sound and music repositories, audio production tools and users of audio content. The developed technologies should enable the reuse of open audio material, facilitating its integration into the production workflows of the creative industries.
AudioCommons is supported by the European Commission with a project that will run from February 2016 to January 2019. The project is coordinated by the Music Technology Group of the Universitat Pompeu Fabra in Barcelona, and the project partners are the Centre for Digital Music at Queen Mary University of London, the University of Surrey, Jamendo, AudioGaming, and Waves.
Within this project the consortium will carry out research and development efforts on the following topics:
  • Intellectual property and business models: commonly understood frameworks for publishing and master rights to particular audio and music recordings will be challenged within the Audio Commons Ecosystem (ACE), and we will research how to make those challenges understandable and, ultimately, useful for the industry. In the first instance, this will involve understanding the rights management requirements in a high-reuse scenario such as the one we envision, with usage recommendations made as necessary. We will also research the emerging business models that may be created by the interaction of the ACE with publishers, creators and consumers.
  • Audio ontologies: an important part of the research in Audio Commons will focus on defining an ontology for the unified annotation of audio content that allows proper representation and retrieval of content in the different use cases of the creative industries. We will ground the design of the ontology in requirements collected from the industry and extend existing work on multimedia semantic representation. The concepts of the ontology will serve as a guide for the semantic audio annotation technologies to be further developed.
  • Semantic description of audio content: we will work on improving the state of the art in sound and music description and semantic representation technologies. We will focus our research on aspects that have usually been overlooked in the existing literature, such as the development of descriptors targeted at short music samples, and will also emphasise the development of reliable high-level semantic descriptors using larger and crowd-sourced datasets. On the one hand, we will focus on the description of music content (i.e., music pieces and music samples), and on the other, on the description of non-musical content such as sound effects.
19 Oct 2015 - 11:54 | view
Seminar by Ye Wang on Music Technology for health applications
19 Oct 2015
Prof. Ye Wang, from National University of Singapore, will give a seminar entitled "Sound, Music and Sensor Computing for Health and Wellbeing" on Monday, October 19, 2015, at 3:30 pm in room 55.410.
Abstract: The use of music as an aid in healing body and mind has received enormous attention over the last 20 years from a wide range of disciplines, including neuroscience, physical therapy, exercise science, and psychological medicine. We have attempted to transform insights gained from the scientific study of music and medicine into real-life applications that can be delivered widely, effectively, and accurately. We have been trying to use music in evidence-based and/or preventative medicine. In this talk, I will present some of our recent and ongoing projects which facilitate the delivery of established music-enhanced therapies, harnessing the synergy of sound and music computing (SMC), mobile/sensor computing, and cloud computing technologies to promote healthy lifestyles and to facilitate disease prevention, diagnosis, and treatment in both developed countries and resource-poor developing countries.
16 Oct 2015 - 15:33 | view
21 researchers from the MTG participate in ISMIR 2015
Twenty-one researchers from the MTG participate in this year’s International Society for Music Information Retrieval (ISMIR) Conference, which takes place in Malaga, Spain, from October 26th to 30th, 2015.
These are the papers that will be presented in the regular oral and poster sessions of the conference:
A number of MTG researchers are also involved in a tutorial on flamenco music, and others will take part in other events of the conference, such as the late-breaking/demo sessions and the music concerts.
14 Oct 2015 - 12:03 | view
Master theses from the SMC Master 2014-2015
14 Oct 2015 - 11:33 | view
Seminar by Julian O’Kelly on Music Therapy with Prolonged Disorders of Consciousness
14 Oct 2015

Julian O’Kelly, Royal Hospital for Neuro-disability, UK
Wednesday, October 14, 2015, 3:30 pm, room 55.410

Host: Rafael Ramirez

Title: Music Therapy with Prolonged Disorders of Consciousness: in Retrospect and Prospect

Music therapists have been researching the effects of music therapy on those in coma, vegetative and minimally conscious states for over thirty years. This talk will provide an overview of the development of contrasting music therapy approaches during this period, arguing the case for more standardised approaches using consensus nomenclature, and for greater dialogue with neuroscience going forward. This perspective has informed the development of two neurophysiological and behavioral studies exploring the effects of music therapy in the assessment and rehabilitation of those with prolonged disorders of consciousness (PDOC). Preliminary findings will be presented from a cross-over study comparing the rehabilitative and prognostic potential of music therapy with that of preferred text narration, using a range of neurophysiological measures (EEG, heart rate variability, respiration) and time-sampled behavioral video data. Findings will be discussed in relation to other models of practice, the complexity of the field of research, and the potential of music therapy as a tool for revealing what intact brain network activity exists in those with PDOC. Reflections will be provided on the relevance of these findings to the UK model of neuro-rehabilitation and the sustainability of music therapy in this competitive market.

14 Oct 2015 - 10:18 | view
II Jornada Música y Medicina - Musicoterapia en Oncología
The MTG participates in the II Jornada Música y Medicina. The Institut Català de Musicoteràpia invites you to take part in the 2nd Música y Medicina conference, on this occasion devoted to advances in music therapy in oncology: “Musicoterapia y cáncer” (Music Therapy and Cancer). This scientific meeting takes place on October 15 and 16 at the Parc de Recerca Biomèdica de Barcelona (PRBB).
14 Oct 2015 - 09:46 | view
International Workshop on Quantitative and Qualitative Music Therapy Research
15 Oct 2015
We are organizing the International Workshop on Quantitative and Qualitative Music Therapy Research (Q&QMT 2015). The workshop will be held at the Universitat Pompeu Fabra (Tanger Building, room 55.309). Its aim is to promote fruitful collaboration among researchers, music therapists, musicians, psychologists and physicians interested in music therapy and its effects, as evaluated with quantitative and qualitative methods. The workshop will provide the opportunity to learn about, present and discuss ongoing work in the area. We believe this is a timely workshop because of the increasing interest in quantitative and qualitative methods in music therapy. Full details can be found at
14 Oct 2015 - 09:40 | view