News and Events

Performance on flamenco, mathematics and technology
Nadine Kroher, a researcher in the COFLA and SIGMUS projects within the Sound and Music Description area of the MTG, will perform together with other researchers from the COFLA project in an event devoted to flamenco music and technology, to be held in Seville on September 26th 2014.

A "cantaor" singing, analyzed in real time to visualize acoustic aspects related to his particular style and to automatically detect the flamenco style and variant.

Program: Flamenco, Matemáticas y Tecnología Musical (Flamenco, Mathematics and Music Technology). Several flamenco singing styles are performed and analyzed from a mathematical-computational point of view. A live demonstration shows the software under development, which is capable of recognizing the cantes being performed. The event closes with a flamenco performance as a festive finale.

Scientific lead: José Miguel Díaz Báñez.
Research group: COFLA (COmputational analysis of FLAmenco music), Universidad de Sevilla.
Venue: assembly hall (Sala Chicarreros) of Fundación CajaSol, Plaza de San Francisco.
Free admission, subject to capacity.
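For the technically curious, the style-recognition idea can be sketched in a few lines. The snippet below is only an illustrative toy, not the COFLA software: it assumes pitch contours have already been extracted by a melody-extraction algorithm (e.g. MELODIA), and uses made-up summary features and synthetic data.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def contour_features(pitch_hz):
    """Toy style descriptors computed from a vocal pitch contour in Hz."""
    voiced = pitch_hz[pitch_hz > 0]                 # keep voiced frames only
    cents = 1200 * np.log2(voiced / voiced.mean())  # pitch around the mean, in cents
    return [cents.std(),                            # overall pitch spread
            np.abs(np.diff(cents)).mean(),          # frame-to-frame movement (ornamentation)
            cents.max() - cents.min()]              # melodic range

# Synthetic stand-ins for contours of two hypothetical cante classes.
rng = np.random.default_rng(0)
ornate = [220 * 2 ** (rng.normal(0, 2.0, 500).cumsum() / 1200) for _ in range(10)]
plain = [220 * 2 ** (rng.normal(0, 0.5, 500).cumsum() / 1200) for _ in range(10)]

X = [contour_features(np.asarray(c)) for c in ornate + plain]
y = ["ornate"] * 10 + ["plain"] * 10
clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)

# A new contour (e.g. from the live analysis buffer) is classified by style.
test = 220 * 2 ** (rng.normal(0, 2.0, 500).cumsum() / 1200)
print(clf.predict([contour_features(test)]))        # expected: ['ornate']
```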

5 Sep 2014 - 14:01 | view
Journal article published in Frontiers in Cognitive Science

Our open-access journal article on string quartet interdependence, published in the Performance Science research topic of Frontiers in Cognitive Science, is available online! The article proposes and evaluates a computational methodology for quantifying the amount of interdependence among the members of a string quartet in terms of four distinct dimensions of the performance (Intonation, Dynamics, Timbre and Tempo).

Papiotis P., Marchini M., Perez-Carrillo A. and Maestre E. (2014) Measuring ensemble interdependence in a string quartet through analysis of multidimensional performance data. Front. Psychol. 5:963. doi: 10.3389/fpsyg.2014.00963

Abstract: In a musical ensemble such as a string quartet, the musicians interact and influence each other's actions in several aspects of the performance simultaneously in order to achieve a common aesthetic goal. In this article, we present and evaluate a computational approach for measuring the degree to which these interactions exist in a given performance. We recorded a number of string quartet exercises under two experimental conditions (solo and ensemble), acquiring both audio and bowing motion data. Numerical features in the form of time series were extracted from the data as performance descriptors representative of four distinct dimensions of the performance: Intonation, Dynamics, Timbre, and Tempo. Four different interdependence estimation methods (two linear and two nonlinear) were applied to the extracted features in order to assess the overall level of interdependence between the four musicians. The obtained results suggest that it is possible to correctly discriminate between the two experimental conditions by quantifying interdependence between the musicians in each of the studied performance dimensions; the nonlinear methods appear to perform best for most of the numerical features tested. Moreover, by using the solo recordings as a reference to which the ensemble recordings are contrasted, it is feasible to compare the amount of interdependence that is established between the musicians in a given performance dimension across all exercises, and relate the results to the underlying goal of the exercise. We discuss our findings in the context of ensemble performance research, the current limitations of our approach, and the ways in which it can be expanded and consolidated.
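To give a flavor of the approach, here is a deliberately simplified sketch (not the estimators used in the article): a basic linear interdependence score for a group of performers can be taken as the mean absolute pairwise correlation between their descriptor time series, and compared between solo and ensemble conditions.

```python
import numpy as np

def mean_pairwise_dependence(series):
    """Mean absolute pairwise Pearson correlation between performers'
    descriptor time series (one row per musician) -- a simple linear
    interdependence score; the article also evaluates nonlinear estimators."""
    r = np.corrcoef(series)                       # pairwise correlation matrix
    upper = r[np.triu_indices_from(r, k=1)]       # pairwise entries only
    return np.abs(upper).mean()

rng = np.random.default_rng(1)
shared = rng.normal(size=1000)                    # common expressive signal

# "Ensemble" condition: each musician's descriptor tracks a shared signal.
ensemble = np.stack([shared + 0.5 * rng.normal(size=1000) for _ in range(4)])
# "Solo" condition: four independent takes of the same exercise.
solo = np.stack([rng.normal(size=1000) for _ in range(4)])

print(mean_pairwise_dependence(ensemble))         # high (~0.8)
print(mean_pairwise_dependence(solo))             # near zero
```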

2 Sep 2014 - 18:10 | view
Seminar by Mark Sandler on Semantic Audio
8 Sep 2014

Mark Sandler, from the Centre for Digital Music of Queen Mary University of London, gives a seminar on Monday September 8th 2014 at 15:00h in room 55.309 on "Semantic Audio: combining semantic web technology with audio analysis".

Abstract: The seminar will present some of the latest research from the Centre for Digital Music in Semantic Audio, where appropriate by means of demos. These will include the use of semantic linked data to create music browsing applications, the use of content analysis in recording studios to improve the quality of audio features and music informatics applications, and music recommendation based on mood. It will end with a few ideas on Computational Audio - where computer science meets audio processing.
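For context on the "semantic linked data" part: music metadata is represented as RDF triples and queried with SPARQL, so a browsing application can follow links between artists, works, and recordings. Below is a toy sketch using the rdflib Python library; the vocabulary URIs are invented for illustration (real systems use ontologies such as the Music Ontology).

```python
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/music/")   # hypothetical vocabulary
g = Graph()

# A few toy triples: two tracks linked to the same artist.
g.add((EX.track1, EX.performedBy, EX.artistA))
g.add((EX.track2, EX.performedBy, EX.artistA))
g.add((EX.track1, EX.title, Literal("Song one")))
g.add((EX.track2, EX.title, Literal("Song two")))

# The kind of link-following query a semantic music browser is built on:
# "what else has this artist recorded?"
results = g.query(
    "SELECT ?title WHERE { ?t ex:performedBy ex:artistA ; ex:title ?title . }",
    initNs={"ex": EX},
)
for row in results:
    print(row.title)
```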

Bio: Professor Mark Sandler has been applying Digital Signal Processing to problems in audio and music since the late 1970s, and is one of the pioneers of the area known as Music Informatics. He currently specialises in the use of Semantic Technologies for Audio and Music. He has published over 400 papers and graduated over 30 PhD students. He was the Principal Investigator of the pioneering UK-funded OMRAS2 project and the local PI on SIMAC, which was led from UPF. He recently completed a collaborative grant with the BBC and I Like Music in the area of music and emotion, named Making Musical Mood Metadata, which explored the use of mood in music recommendation systems, and has just started a 5-year grant, Fusing Audio and Semantic Technologies for Intelligent Music Production and Consumption. He is currently Chief Scientist of the Centre for Digital Music.



2 Sep 2014 - 13:53 | view
Joint PhD fellowship available
1 Sep 2014 - 15 Sep 2014

Joint PhD fellowship on “Understanding the effect of evoked emotions in long-term memory” at the Department of Information and Communication Technologies (DTIC-UPF)


Human visual perception emerges from complex information processing taking place in the brain. Nonetheless, since our perceptual experience arises with apparent ease, we are often unaware of this complexity. Vision is an active process: detailed representations of our visual world are only built by actively scanning the scene with our eyes through a series of saccades and fixations. The process of actively scanning a visual scene while looking for something in a cluttered environment is known as visual search. The study of visual search processes by means of eye-tracking and EEG recordings not only offers a unique opportunity to gain fundamental insights into visual information processing in the human brain, but also opens new avenues for assessing cognitive function and its relation to normal aging and age-related cognitive pathologies.

The successful applicant will study novel "cognitive signatures" derived from eye-tracking methods and EEG recordings, and will investigate the role of evoked emotions in such signatures. The simultaneous acquisition of eye-tracking and EEG recordings will allow the PhD candidate to investigate the effect of evoked emotions on long-term memory by linking brain activity with behavioral results. Throughout the project, music-evoked emotions will be considered. The opening is for a joint position between the Computational Neuroscience Group and the Music Technology Group at DTIC-UPF.
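To make the "linking brain activity with behavior" step concrete, here is a minimal sketch of fixation-locked EEG analysis using the MNE library. The file name and event codes are placeholder assumptions, and it presupposes that eye-tracker fixation onsets were recorded as triggers in the EEG stream.

```python
import mne

# Placeholder file; assumes fixation onsets from the eye-tracker were
# written into the EEG recording as trigger events during acquisition.
raw = mne.io.read_raw_fif("subject01_raw.fif", preload=True)
raw.filter(0.1, 40.0)                       # basic band-pass for ERP analysis

events = mne.find_events(raw)               # trigger channel -> event array
event_id = {"fixation/positive": 1,         # hypothetical emotion-condition codes
            "fixation/negative": 2}

# Fixation-locked epochs: 200 ms before to 800 ms after each fixation onset.
epochs = mne.Epochs(raw, events, event_id, tmin=-0.2, tmax=0.8,
                    baseline=(None, 0))

# Contrast the evoked responses between the two emotion conditions.
epochs["fixation/positive"].average().plot()
epochs["fixation/negative"].average().plot()
```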

The PhD project will be closely related to and supported by the funded research project TIN2013-40630-R, “ComputVis@Cogn: Visual Search as a Hallmark of Cognitive Function, an Interdisciplinary Computational Approach”.

Starting date: Fall 2014
Duration: 3 years


Students with a strong background in mathematics, computer science, or physical sciences are particularly encouraged to apply. The applicants must hold an MSc degree in Computer Science, Physics, Applied Math, Cognitive Science, Psychology or related discipline. Proficiency in both written and spoken English is required.


Interested people should send a resume as well as an introduction letter to laura [dot] dempere [at] upf [dot] edu and rafael [dot] ramirez [at] upf [dot] edu

Deadline for applications is September 15th 2014. Interested candidates are welcome to contact laura [dot] dempere [at] upf [dot] edu and rafael [dot] ramirez [at] upf [dot] edu for further details.


1 Sep 2014 - 09:28 | view
Research/development position at MTG-UPF
This position will involve working with a team at MTG-UPF in Barcelona to develop audio signal processing applications related to the analysis and characterization of instrumental sounds.
Starting date: mid-September 2014
Duration: 12 months with option to renew
Required skills/qualifications:

MSc degree in Computer Science, Electrical Engineering or similar educational qualification
Experience in audio signal processing, machine learning and scientific programming (Python/C++)
Proficiency in both written and spoken English
Preferred skills/experience:

Experience using Essentia (a brief usage sketch follows this list)
Music education and experience in playing a musical instrument
Familiarity with web technologies
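To give candidates a feel for the toolchain, here is a minimal Essentia usage sketch; the file name is a placeholder and the example is illustrative only, not part of the position's actual codebase.

```python
import essentia.standard as es

# Load a recording of an instrumental sound (placeholder file name).
audio = es.MonoLoader(filename="violin_note.wav", sampleRate=44100)()

window = es.Windowing(type="hann")
spectrum = es.Spectrum()
mfcc = es.MFCC()

# Frame-wise spectral description: a typical front end for machine-learning
# models that characterize instrumental timbre.
coeffs = []
for frame in es.FrameGenerator(audio, frameSize=2048, hopSize=1024):
    _, frame_coeffs = mfcc(spectrum(window(frame)))
    coeffs.append(frame_coeffs)

print(len(coeffs), "frames analyzed")
```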


The Music Technology Group of the Universitat Pompeu Fabra is a leading research group with more than 40 researchers, carrying out research on topics such as audio signal processing, sound and music description, musical interfaces, sound and music communities, and performance modeling. The MTG wants to contribute to the improvement of the information and communication technologies related to sound and music, carrying out competitive research at the international level while transferring its results to society. To that end, the MTG aims at finding a balance between basic and applied research while promoting interdisciplinary approaches that incorporate knowledge from both scientific/technological and humanistic/artistic disciplines. For more information on MTG-UPF please visit the MTG website.


Interested people should send a resume as well as an introduction letter to mtg [at] upf [dot] edu (subject: Research/development position)
28 Jul 2014 - 14:51 | view
PhD fellowship on “Audio-Visual Approaches for Music Content Description”

The Music Technology Group and the Image Processing Group of the Department of Information and Communication Technologies, Universitat Pompeu Fabra in Barcelona are opening a joint PhD fellowship in the topic of “Audio-Visual Approaches for Music Content Description” to start in the Fall of 2014.


Music is a highly multimodal concept, where various types of heterogeneous information are associated with a music piece (audio, musician's gestures and facial expression, lyrics, etc.). This has recently led researchers to approach music through its various facets, giving rise to multimodal music analysis studies.

The goal of this fellowship is to investigate the complementarity of audio and image description technologies in order to improve the accuracy and meaningfulness of state-of-the-art music description methods. These methods are at the core of content-based music information retrieval. Several standard tasks could benefit from this work: structural analysis and segmentation, discovery of repeated themes and sections, music similarity computation and retrieval, genre/style classification, artist identification, and emotion (mood) characterization.
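As a deliberately simplified picture of what audio-visual complementarity can mean in practice: descriptors computed separately from the audio track and from the video frames are fused (here by plain concatenation) before training a single classifier. The data below is synthetic and the features are stand-ins.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n = 200
labels = rng.integers(0, 2, n)              # e.g. two musical genres

# Stand-ins for per-piece descriptors: audio (e.g. MFCC statistics) and
# visual (e.g. musician-motion statistics), each weakly informative alone.
audio_feats = 0.6 * labels[:, None] + rng.normal(size=(n, 10))
visual_feats = 0.6 * labels[:, None] + rng.normal(size=(n, 6))

# Early fusion: concatenate both modalities into a single feature vector.
fused = np.hstack([audio_feats, visual_feats])

for name, X in [("audio", audio_feats), ("visual", visual_feats), ("fused", fused)]:
    acc = cross_val_score(LogisticRegression(), X, labels, cv=5).mean()
    print(f"{name:6s} accuracy: {acc:.2f}")
```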

This PhD will be linked to ongoing funded research projects at the MTG and GPI, such as PHENICX (Performances as Highly Enriched aNd Interactive Concert eXperiences), 'Inpainting Tools for Video Post-production. Variational theory and fast algorithms', SIGMUS (SIGnal analysis for the discovery of traditional MUSic repertoire) and MTM2012-30772.


Applicants should have experience in audio and image signal processing, and hold an MSc in a related field (e.g. telecommunications, electrical engineering, physics, mathematics, or computer science). Experience in scientific programming (Matlab/Python/C++) and excellent English are essential. A musical background and expertise in multimedia information retrieval will also be valued.

The grant involves teaching assistance, so interest in teaching is also valued.


Interested candidates should send a CV and motivation letter to Prof. Emilia Gómez (emilia [dot] gomez [at] upf [dot] edu) and Prof. Gloria Haro (gloria [dot] haro [at] upf [dot] edu) and include in the subject [PhD Audio-Visual].
They will also have to apply to the PhD program of the DTIC at UPF.

Application deadline: September 1st 2014
Starting date: October 15th 2014

More information:

28 Jul 2014 - 09:29 | view
Participation in NIME 2014

The 14th International Conference on New Interfaces for Musical Expression (NIME) took place at Goldsmiths, University of London, between June 30th and July 3rd. The MTG took part, presenting three papers.

7 Jul 2014 - 09:06 | view
Presentations of PhD proposals
30 Jun 2014

On June 30th 2014 we will hold the defences of the thesis proposals of five first-year PhD students of the MTG. The presentations are open to everyone.

10:15h - Nadine Kroher (Supervisor: Emilia Gomez). Title of proposal: "Computational Transcription, Description and Analysis of the Flamenco Singing Voice". Room 55.410 (Tanger building)

11:00h - Marius Miron (Supervisor: Emilia Gomez). Title of proposal: "Source Separation and Signal Modeling of Orchestral Music Mixtures". Room 55.410 (Tanger building)

11:45h - Georgi Dzhambazov (Supervisor: Xavier Serra). Title of proposal: "Analysis of timbral and phonetic characteristics of singing voice in the Turkish art music tradition". Room 55.410 (Tanger building)

12:30h - Sergio Oramas (Supervisor: Xavier Serra). Title of proposal: "Harvesting, Structuring and Exploiting Social Data in Music Information Retrieval". Room 55.410 (Tanger building)

16:00h - Rafael Caro Repetto (Supervisor: Xavier Serra). Title of proposal: "Understanding Xipi and Erhuang. Analysis of the musical dimension of Jingju Arias". Room 20.287 (Ciutadella campus)

20 Jun 2014 - 16:22 | view
Participation in FMA 2014
Nadine Kroher, Georgi Dzhambazov, Sertan Şentürk and Xavier Serra participate in the 4th International Workshop on Folk Music Analysis, which takes place in Istanbul, Turkey, on June 12th and 13th, 2014.
They are presenting work done at the MTG.
10 Jun 2014 - 17:37 | view
SMC master thesis defenses
1 Jul 2014 - 4 Jul 2014

What? SMC master students 2013/2014 defend their final projects on July 1st-4th

Where? Room 55.309 Tanger building

Detailed schedule:

Student name   Title   Supervisor   Date   Hour 
Aram Estiu Graugés   Animal vocalization analysis/synthesis   Jordi Janer and Jordi Bonada   July 1st   9:30  
S.I. Mimilakis   Voice quality modelling with the Wide-Band Harmonic Sinusoidal Modeling Algorithm   Jordi Bonada   July 1st   10:00  
Roger Rios Rubiras   A comparative study of speech dereverberation algorithms on music signals for interactive remixing applications   Stanislaw Gorlow and Jordi Janer   July 1st   10:30  
Oriol Romaní Picas   Score alignment in recordings from large ensembles   Julio J. Carabías-Orti and Jordi Janer   July 1st   11:15  
Charalambos Christopoulos   Augmented music performance by gestural recognition in 3D space using Polhemus sensors   Alfonso Perez   July 1st   11:45  
Giacomo Herrero Coli   Supervised music structure segmentation/annotation   Joan Serrà   July 2nd   9:30  
Toros Ufuk Senan   Preservation and study of ancient wood musical instruments stored in museums and conservatories.   Enric Guaus and Paul Poletti   July 2nd   10:00  
Nuno Hespanhol   Automatic Classification of Musical Sounds   Xavier Serra and Frederic Font   July 2nd   10:30  
Constantinos A. Dimitriou   Similarity Measures for Audio Classes   Xavier Serra and Frederic Font   July 2nd   11:15  
Vignesh Ishwar   Prominent pitch analysis for the study of vocal melodies in music   Xavier Serra   July 2nd   11:45  
Andrés Pérez López   Real time tools for 3d audio spatialization   Daniel Arteaga   July 3rd   9:30  
Nicholas Harley   Evaluation of Pitch-Class Set Similarity Measures for Tonal Analysis   Agustín Martorell   July 3rd   10:00  
Belén Nieto Núñez   Melody Extraction: addressing user satisfaction and context-awareness   Emilia Gómez   July 3rd   10:30  
Jorge A. Cuarón   Perceptual Validation of Chord Estimation Evaluation Standards   Agustín Martorell   July 3rd   11:15  
Jaime Parra Damborenea   ReactBlocks: A 3D Tangible Interface for Music Learning   Sergi Jordà and Cárthach Ó Nuanáin   July 4th   9:30  
Hazar Emre Tez   Symbolic Modular 2D GUIs with Physical Properties   Sergi Jordà and Cárthach Ó Nuanáin   July 4th   10:00  
Daniel Gómez Marín   Smart Percussive spaces   Sergi Jorda   July 4th   10:30  
Marcel Schmidt   A Musical Interface for People with Motor Disabilities   Zacharias Vamvakousis   July 4th   11:15  
Francisco Rodríguez Algarra   Audio-based computational stylometry for electronic music   Perfecto Herrera   July 4th   11:45  
Erim Yurci   Emotion detection from EEG signals: correlating cerebral cortex activity with music evoked emotion   Rafael Ramirez   July 4th   12:15  
Urbez Caplabo   Harmony high-level features   Perfecto Herrera and Sergi Jordà   July 3rd   13:00  


10 Jun 2014 - 17:03 | view