News and Events
Participation in DLfM 2014
Mohamed Sordo and Alastair Porter participate in the 1st International Digital Libraries for Musicology workshop (DLfM 2014), which takes place on September 12th, 2014, in London (UK), in conjunction with the ACM/IEEE Digital Libraries 2014 conference. These are the papers presented with MTG participation, all in the context of CompMusic:
- M. Sordo, A. Chaachoo, and X. Serra. “Creating Corpora for Computational Research in Arab-Andalusian Music”.
- B. Uyar, H. Sercan Atlı, S. Şentürk, B. Bozkurt, and X. Serra. “Corpora for Computational Research of Turkish Makam Music”.
- A. Porter and X. Serra. “An Analysis and Storage System for Music Research Datasets”.
MOOC on Audio Signal Processing for Music Applications
In collaboration with Prof. Julius Smith from Stanford University, Xavier Serra has put together a 10-week course on Audio Signal Processing for Music Applications on the Coursera online platform. The course starts on October 1st, and the landing page is https://www.coursera.org/course/audio.
The course focuses on the spectral processing techniques of relevance for the description and transformation of sounds, developing the basic theoretical and practical knowledge with which to analyze, synthesize, transform and describe audio signals in the context of music applications.
The course is based on open software and content. The demonstrations and programming exercises are done using Python under Ubuntu, and the references and materials for the course come from open online repositories. The software and materials developed for the course are also distributed with open licenses.
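To give a flavour of the kind of exercise involved, here is a short spectral-analysis sketch in Python. It is illustrative only, not actual course material; the signal, frame size, and parameters are made up, and only numpy is assumed.

```python
# Illustrative sketch (not official course material): compute the
# magnitude spectrum of one windowed frame of a synthetic tone.
import numpy as np

fs = 44100                                  # sampling rate in Hz
N = 1024                                    # frame size in samples
t = np.arange(N) / fs                       # time axis for one frame
x = 0.8 * np.sin(2 * np.pi * 440.0 * t)     # a 440 Hz sine tone

w = np.hanning(N)                           # analysis window reduces spectral leakage
X = np.fft.rfft(x * w)                      # DFT of the windowed frame
mag_db = 20 * np.log10(np.abs(X) + np.finfo(float).eps)  # magnitude in dB

peak_bin = int(np.argmax(mag_db))
peak_freq = peak_bin * fs / N               # bin index -> frequency in Hz
print(f"spectral peak near {peak_freq:.0f} Hz")
```

With a 1024-sample frame at 44100 Hz the bin resolution is about 43 Hz, so the detected peak lands near, but not exactly at, 440 Hz; refining such estimates is exactly the kind of problem the course's analysis techniques address.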
Performance on flamenco, mathematics and technology
Nadine Kroher, a researcher from the COFLA and SIGMUS projects within the Sound and Music Description area of the MTG, is performing in an event devoted to flamenco music and technology on September 26th 2014 in Seville, together with other researchers from the COFLA project.
The event features a "cantaor" whose singing is analyzed in real time to visualize acoustic aspects related to his particular style and to automatically detect the flamenco style and variant.
Program (originally in Spanish): Flamenco, Matemáticas y Tecnología musical. Several flamenco singing styles are performed and analyzed from a mathematical-computational point of view. A live demonstration shows the software under development, which is capable of recognizing the styles being sung. The event closes with a flamenco performance as a festive finale.
Scientific coordinator: José Miguel Díaz Báñez.
Journal article published in Frontiers in Psychology
Our open-access journal article on string quartet interdependence, part of the Performance Science research topic in the Cognitive Science section of Frontiers in Psychology, is now available online! The article proposes and evaluates a computational methodology for quantifying the amount of interdependence among the members of a string quartet in terms of four distinct dimensions of the performance: Intonation, Dynamics, Timbre, and Tempo.
Papiotis P., Marchini M., Perez-Carrillo A. and Maestre E. (2014) Measuring ensemble interdependence in a string quartet through analysis of multidimensional performance data. Front. Psychol. 5:963. doi: 10.3389/fpsyg.2014.00963
Abstract: In a musical ensemble such as a string quartet, the musicians interact and influence each other's actions in several aspects of the performance simultaneously in order to achieve a common aesthetic goal. In this article, we present and evaluate a computational approach for measuring the degree to which these interactions exist in a given performance. We recorded a number of string quartet exercises under two experimental conditions (solo and ensemble), acquiring both audio and bowing motion data. Numerical features in the form of time series were extracted from the data as performance descriptors representative of four distinct dimensions of the performance: Intonation, Dynamics, Timbre, and Tempo. Four different interdependence estimation methods (two linear and two nonlinear) were applied to the extracted features in order to assess the overall level of interdependence between the four musicians. The obtained results suggest that it is possible to correctly discriminate between the two experimental conditions by quantifying interdependence between the musicians in each of the studied performance dimensions; the nonlinear methods appear to perform best for most of the numerical features tested. Moreover, by using the solo recordings as a reference to which the ensemble recordings are contrasted, it is feasible to compare the amount of interdependence that is established between the musicians in a given performance dimension across all exercises, and relate the results to the underlying goal of the exercise. We discuss our findings in the context of ensemble performance research, the current limitations of our approach, and the ways in which it can be expanded and consolidated.
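The article's estimators (two linear and two nonlinear) are not reproduced here. As a toy illustration of the underlying idea only, the following sketch, built on synthetic data rather than the article's recordings, contrasts the linear correlation between two simulated performers' descriptor time series under an "ensemble" and a "solo" condition:

```python
# Toy illustration (not the article's method or data): quantify linear
# interdependence between two performers' descriptor time series with
# the Pearson correlation coefficient, using synthetic signals.
import numpy as np

rng = np.random.default_rng(0)
n = 500
common = rng.normal(size=n)                 # shared performance trajectory

# "Ensemble" condition: both players follow the common trajectory,
# each with individual deviations.
player_a = common + rng.normal(scale=0.5, size=n)
player_b = common + rng.normal(scale=0.5, size=n)

# "Solo" condition: the two players are recorded independently.
solo_a = rng.normal(size=n)
solo_b = rng.normal(size=n)

def pearson(x, y):
    """Linear interdependence estimate in [-1, 1]."""
    x = x - x.mean()
    y = y - y.mean()
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))

r_ensemble = pearson(player_a, player_b)
r_solo = pearson(solo_a, solo_b)
print(f"ensemble r = {r_ensemble:.2f}, solo r = {r_solo:.2f}")
```

In the article, more sophisticated linear and nonlinear estimators play the role that `pearson` plays here; the point of the sketch is only that coupled signals score higher than independent ones, which is what allows the two experimental conditions to be discriminated.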
Seminar by Mark Sandler on Semantic Audio
8 Sep 2014
Mark Sandler, from the Centre for Digital Music of Queen Mary University of London, gives a seminar on Monday September 8th 2014 at 15:00h in room 55.309 on "Semantic Audio: combining semantic web technology with audio analysis".
Abstract: The seminar will present some of the latest research from the Centre for Digital Music in Semantic Audio, where appropriate by means of demos. These will include the use of semantic linked data to create music browsing applications, the use of content analysis in recording studios to improve the quality of audio features and music informatics applications, and music recommendation based on mood. It will end with a few ideas on Computational Audio - where computer science meets audio processing.
Bio: Professor Mark Sandler has been applying Digital Signal Processing to problems in audio and music since the late 1970s, and is one of the pioneers of the area known as Music Informatics. He currently specialises in the use of Semantic Technologies for Audio and Music. He has published over 400 papers and graduated over 30 PhD students. He was the Principal Investigator of the pioneering UK-funded OMRAS2 project (omras2.org) and the local PI on SIMAC, which was led from UPF. He recently completed a collaborative grant with BBC and I Like Music in the area of music and emotion, named Making Musical Mood Metadata (http://www.bbc.co.uk/rd/projects/making-musical-mood-metadata) which explored the use of mood in music recommendation systems, and has just started a 5 year grant, Fusing Audio and Semantic Technologies for Intelligent Music Production and Consumption. He is currently Chief Scientist of the Centre for Digital Music.
Joint PhD fellowship available
1 Sep 2014 - 15 Sep 2014
Joint PhD fellowship on “Understanding the effect of evoked emotions in long-term memory” at the Department of Information Technologies and Communications DTIC-UPF
Human visual perception emerges from complex information processing taking place in the brain. Nonetheless, since our perceptual experience arises with apparent ease, we are often unaware of such complexity. Vision is an active process in that detailed representations of our visual world are only built from actively scanning our eyes with a series of saccades and fixations. The process of actively scanning a visual scene while looking for something in a cluttered environment is known as visual search. The study of visual search processes by means of eye-tracking and EEG recordings not only offers a unique opportunity to gain fundamental insights into visual information processing in the human brain, but also opens new avenues to assess cognitive function and its relation to normal aging and age-related cognitive pathologies.
Students with a strong background in mathematics, computer science, or physical sciences are particularly encouraged to apply. The applicants must hold an MSc degree in Computer Science, Physics, Applied Math, Cognitive Science, Psychology or related discipline. Proficiency in both written and spoken English is required.
HOW TO APPLY?
Interested people should send a resume as well as an introduction letter to laura [dot] dempere [at] upf [dot] edu and rafael [dot] ramirez [at] upf [dot] edu
Research/development position at MTG-UPF
This position will involve working with a team at MTG-UPF in Barcelona to develop audio signal processing applications related to the analysis and characterization of instrumental sounds.
Starting date: mid-September 2014
Duration: 12 months with option to renew
Requirements:
- MSc degree in Computer Science, Electrical Engineering or a similar educational qualification
- Experience in audio signal processing, machine learning and scientific programming (Python/C++)
- Proficiency in both written and spoken English
Desirable:
- Experience using Essentia and Freesound.org
- Music education and experience playing a musical instrument
- Familiarity with web technologies
The Music Technology Group of the Universitat Pompeu Fabra is a leading research group with more than 40 researchers, carrying out research on topics such as audio signal processing, sound and music description, musical interfaces, sound and music communities, and performance modeling. The MTG wants to contribute to the improvement of the information and communication technologies related to sound and music, carrying out competitive research at the international level while transferring its results to society. To that end, the MTG aims to find a balance between basic and applied research, promoting interdisciplinary approaches that incorporate knowledge from both scientific/technological and humanistic/artistic disciplines. For more information on MTG-UPF please visit http://mtg.upf.edu
HOW TO APPLY?
Interested people should send a resume as well as an introduction letter to mtg [at] upf [dot] edu (subject: Research/development position)
PhD fellowship on “Audio-Visual Approaches for Music Content Description”
The Music Technology Group and the Image Processing Group of the Department of Information and Communication Technologies, Universitat Pompeu Fabra in Barcelona are opening a joint PhD fellowship in the topic of “Audio-Visual Approaches for Music Content Description” to start in the Fall of 2014.
Music is a highly multimodal concept, where various types of heterogeneous information are associated with a music piece (audio, the musician's gestures and facial expressions, lyrics, etc.). This has recently led researchers to approach music through its various facets, giving rise to multimodal music analysis studies.
The goal of this fellowship is to research the complementarity of audio and image description technologies in order to improve the accuracy and meaningfulness of state-of-the-art music description methods. These methods are the core of content-based music information retrieval tasks. Several standard tasks could benefit from this work: structural analysis and segmentation, discovery of repeated themes and sections, music similarity computation and music retrieval, genre/style classification, artist identification, and emotion (mood) characterization.
This PhD will be linked to ongoing funded research projects at the MTG and GPI, such as PHENICX (Performances as Highly Enriched aNd Interactive Concert eXperiences), 'Inpainting Tools for Video Post-production. Variational theory and fast algorithms', SIGMUS (SIGnal analysis for the discovery of traditional MUSic repertoire) and MTM2012-30772.
Applicants should have experience in audio and image signal processing, and hold an MSc in a related field.
The grant involves teaching assistance, so interest for teaching is also valued.
Interested candidates should send a CV and motivation letter to Prof. Emilia Gómez (emilia [dot] gomez [at] upf [dot] edu) and Prof. Gloria Haro (gloria [dot] haro [at] upf [dot] edu) and include in the subject [PhD Audio-Visual].
Application deadline: September 1st 2014
Participation in NIME 2014
The 14th International Conference on New Interfaces for Musical Expression (NIME) took place at Goldsmiths, University of London, between June 30th and July 3rd. The MTG took part, presenting three papers:
Presentations of PhD proposals
30 Jun 2014
On June 30th 2014 the MTG holds the thesis proposal defences of five first-year PhD students. The presentations are open to everyone.
10:15h - Nadine Kroher (Supervisor: Emilia Gomez). Title of proposal: "Computational Transcription, Description and Analysis of the Flamenco Singing Voice". Room 55.410 (Tanger building)
11:00h - Marius Miron (Supervisor: Emilia Gomez). Title of proposal: "Source Separation and Signal Modeling of Orchestral Music Mixtures". Room 55.410 (Tanger building)
11:45h - Georgi Dhambazov (Supervisor: Xavier Serra). Title of proposal: "Analysis of timbral and phonetic characteristics of singing voice in the Turkish art music tradition". Room 55.410 (Tanger building)
12:30h - Sergio Oramas (Supervisor: Xavier Serra). Title of proposal: "Harvesting, Structuring and Exploiting Social Data in Music Information Retrieval". Room 55.410 (Tanger building)
16:00h - Rafael Caro Repetto (Supervisor: Xavier Serra). Title of proposal: "Understanding Xipi and Erhuang: Analysis of the Musical Dimension of Jingju Arias". Room 20.287 (Ciutadella campus)