News and Events

Music Hack Day in Barcelona

Music Hack Day Barcelona, jointly organized by the MTG-UPF and Sónar, will be a satellite event of the Sónar PRO 2011 festival, held at the Barcelona Contemporary Culture Center (CCCB) on the 16th and 17th of June, 2011.

Music Hack Day is a session of hacking in which participants will conceptualize, create and present their projects: music + software + mobile + hardware + art + the web. Anything goes as long as it's music-related!

In this Music Hack Day we will put special emphasis on involving the artist community. If you are an artist who loves creativity, culture and technology, please join us!
28 Apr 2011 - 04:33 | view
Phonos: Concert and book presentation by Harry Sparnaay
Concert by Harry Sparnaay with bass clarinet and electronics plus presentation of his book "The bass clarinet (a personal story)" on Thursday April 14th at 19:30 in the Sala Polivalent.
12 Apr 2011 - 07:13 | view
Seminar by Zbigniew Ras on automatic music indexing

On Thursday April 7th 2011 at 15:30 in room 52.321, Zbigniew Ras, from the University of North Carolina and Warsaw University of Technology, will give a research seminar on "Cascade classifiers for automatic music indexing".

Abstract: In a hierarchical decision system S, a group of classifiers can be trained using objects in S partitioned by the values of the decision attribute at all of its granularity levels. Then, attribute values only at the highest granularity level (whose corresponding granules are the largest) are used to split S into decision sub-systems, each one built by selecting the objects in S with the same decision value. These sub-systems are used for training new classifiers at all granularity levels of their decision attribute. Each sub-system is split further by the sub-values of its decision value. The resulting tree-type structure, with groups of classifiers assigned to each of its nodes, is called a cascade classifier. In the area of automatic music indexing, this cascade classifier makes a first estimate at the highest level of decision attribute values, which stands for the musical instrument family. Further estimation is then done within that specific family. Experiments have shown that a cascade system performs better than traditional flat classification methods, which directly estimate the instrument without the higher-level family information. We will also introduce a new hierarchical instrument schema derived from the clustering of acoustic features. This new schema better describes the similarity among different instruments, or among different playing techniques of the same instrument. The classification results show the higher accuracy of a cascade system with the new schema compared to traditional schemas.
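The two-stage idea described in the abstract can be sketched in a few lines of Python. This is a hypothetical illustration only: toy 2-D "features" and simple nearest-centroid classifiers stand in for the trained classifiers at each node of the cascade.

```python
# Minimal sketch of a two-level cascade classifier: stage 1 predicts the
# instrument family, stage 2 picks an instrument within that family using
# a sub-classifier trained only on that family's objects.

def centroid(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def nearest(x, centroids):
    # Return the label whose centroid is closest to x (squared Euclidean).
    return min(centroids,
               key=lambda lab: sum((a - b) ** 2
                                   for a, b in zip(x, centroids[lab])))

class CascadeClassifier:
    def __init__(self, training):
        # training: list of (feature_vector, family, instrument)
        fam_groups, inst_groups = {}, {}
        for x, fam, inst in training:
            fam_groups.setdefault(fam, []).append(x)
            inst_groups.setdefault(fam, {}).setdefault(inst, []).append(x)
        self.family_centroids = {f: centroid(v) for f, v in fam_groups.items()}
        # One sub-classifier per family, built from that family's objects only.
        self.instrument_centroids = {
            f: {i: centroid(v) for i, v in insts.items()}
            for f, insts in inst_groups.items()
        }

    def classify(self, x):
        fam = nearest(x, self.family_centroids)            # coarse decision first
        inst = nearest(x, self.instrument_centroids[fam])  # refine within family
        return fam, inst

# Toy 2-D "acoustic features", purely illustrative.
train = [
    ([0.1, 0.9], "strings", "violin"), ([0.2, 0.8], "strings", "cello"),
    ([0.9, 0.1], "brass", "trumpet"), ([0.8, 0.2], "brass", "trombone"),
]
cc = CascadeClassifier(train)
print(cc.classify([0.12, 0.88]))  # → ('strings', 'violin')
```

The flat alternative the abstract compares against would classify directly among all four instruments; the cascade instead restricts the second decision to the family chosen first.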

4 Apr 2011 - 16:56 | view
2nd edition of the DAFX book published
The second edition of the book "DAFX: Digital Audio Effects", edited by Udo Zölzer, is out. The chapter on Spectral processing has been updated from the first edition by Jordi Bonada and Xavier Serra.
28 Mar 2011 - 13:46 | view
Seminar by Meinard Müller on Music Signal Processing

On Thursday March 24th 2011 at 15:30h in room 52.321, Meinard Müller, from the Max Planck Institute for Informatics, will give a talk on "New Developments in Music Signal Processing".

Abstract: Compared to speech signal processing, the field of music signal processing is a relatively young research discipline. Therefore, many techniques and representations have been transferred from the speech domain to the music domain. However, music signals possess specific acoustic and structural characteristics that are not shared by spoken language or audio signals from other domains. To account for musical dimensions such as pitch or rhythm, specialized audio features that exploit musical characteristics are indispensable in analyzing and processing music data. In fact, many tasks of music signal analysis only become feasible by exploiting suitable music-specific assumptions. In this talk, I address a number of feature design principles that account for various musical aspects. In particular, I show how chroma-based audio features can be enhanced by significantly boosting the degree of timbre invariance without degrading the features' discriminative power. Furthermore, I introduce a novel mid-level representation that captures dominant tempo and pulse information in music recordings. To highlight the practical and musical relevance, I discuss the various feature representations in the context of current music information retrieval tasks including music synchronization, beat tracking, and structure analysis. By giving many audio examples and presenting various prototypical user interfaces, this presentation is directed to a general audience.
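As an illustration of the chroma-based features mentioned in the abstract, here is a minimal, hypothetical sketch that folds a magnitude spectrum into a 12-bin chroma vector. Real implementations add windowing, tuning estimation and the timbre-invariance enhancements the talk discusses; the function name and parameters below are assumptions for the example.

```python
import math

def chroma(magnitudes, sample_rate, fft_size, tuning=440.0):
    """Map each FFT bin to one of 12 pitch classes (C=0 ... B=11)
    and sum the bin magnitudes, then normalize to unit sum."""
    bins = [0.0] * 12
    for k, mag in enumerate(magnitudes):
        if k == 0:
            continue  # skip the DC bin (no pitch)
        freq = k * sample_rate / fft_size
        # MIDI pitch number for this bin's centre frequency (69 = A4)
        midi = 69 + 12 * math.log2(freq / tuning)
        bins[int(round(midi)) % 12] += mag
    total = sum(bins)
    return [b / total for b in bins] if total else bins

# A single spectral peak near 441 Hz (44.1 kHz rate, 4096-point FFT):
mags = [0.0] * 2049
mags[41] = 1.0  # bin 41 ≈ 441 Hz, i.e. an A4
c = chroma(mags, 44100, 4096)
print(c.index(max(c)))  # → 9, the pitch class A
```

Folding all octaves onto twelve pitch classes is what gives chroma features their robustness to octave errors and, to a degree, to timbre.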

21 Mar 2011 - 10:12 | view
Joan Serrà defends his PhD thesis on March 23rd

Joan Serrà defends his PhD thesis entitled "Identification of Versions of the Same Musical Composition by Processing Audio Descriptions" on Wednesday 23rd of March 2011 at 12:00h in room 55.309.

The members of the defense jury are: Climent Nadeu (UPC), Ricardo Baeza-Yates (Yahoo! Research and UPF) and Meinard Müller (Saarland University & MPI für Informatik).

Thesis abstract: Automatically making sense of digital information, and especially of digital music documents, is an important problem our modern society is facing. In fact, there are still many tasks that, although easily performed by humans, cannot be effectively performed by a computer. In this work we focus on one such task: the identification of musical piece versions (alternate renditions of the same musical composition, such as cover songs, live recordings, remixes, etc.). In particular, we adopt a computational approach based solely on the information provided by the audio signal. We propose a system for version identification that is robust to the main musical changes between versions, including changes of timbre, tempo, key and structure. This system exploits nonlinear time series analysis tools and standard methods for quantitative music description, and it does not make use of a specific modeling strategy for the data extracted from audio, i.e. it is a model-free system. We report remarkable accuracies for this system, both with our data and through an international evaluation framework. Indeed, according to this framework, our model-free approach achieves the highest accuracy among current version identification systems (up to the moment of writing this thesis). Model-based approaches are also investigated. For that we consider a number of linear and nonlinear time series models. We show that, although model-based approaches do not reach the highest accuracies, they present a number of advantages, especially with regard to computational complexity and parameter setting. In addition, we explore post-processing strategies for version identification systems, and show how unsupervised grouping algorithms allow the characterization and enhancement of the output of query-by-example systems such as version identification ones. To this end, we build and study a complex network of versions and apply clustering and community detection algorithms.
Overall, our work brings automatic version identification to an unprecedented stage where high accuracies are achieved and, at the same time, explores promising directions for future research. Although our steps are guided by the nature of the considered signals (music recordings) and the characteristics of the task at hand (version identification), we believe our methodology can be easily transferred to other contexts and domains.
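The post-processing idea mentioned in the abstract (building a network of versions and grouping it) can be illustrated with a minimal sketch: threshold pairwise similarities into a graph and take its connected components as candidate version sets. The similarity values below are toy numbers, and the thesis applies more sophisticated clustering and community detection than plain connected components.

```python
def version_groups(sim, threshold):
    """sim: symmetric matrix of pairwise similarities between recordings.
    Returns groups of indices connected by above-threshold similarity."""
    n = len(sim)
    # Build adjacency lists: an edge wherever similarity exceeds the threshold.
    adj = {i: [j for j in range(n) if j != i and sim[i][j] >= threshold]
           for i in range(n)}
    seen, groups = set(), []
    for start in range(n):
        if start in seen:
            continue
        stack, comp = [start], []
        while stack:  # depth-first traversal of one component
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            comp.append(v)
            stack.extend(adj[v])
        groups.append(sorted(comp))
    return groups

sim = [  # toy similarities: tracks 0-1 are versions, 2-3 are versions
    [1.0, 0.8, 0.1, 0.2],
    [0.8, 1.0, 0.2, 0.1],
    [0.1, 0.2, 1.0, 0.9],
    [0.2, 0.1, 0.9, 1.0],
]
print(version_groups(sim, 0.5))  # → [[0, 1], [2, 3]]
```

Grouping a query-by-example output this way lets a system report whole version sets rather than isolated pairwise matches.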

18 Mar 2011 - 14:02 | view
Phonos: Audiovisual concert
Concert on Tuesday March 22nd at 19:30 in the Espai Polivalent organized by Phonos and including works with audiovisual media, viola, flute and electronics.
17 Mar 2011 - 10:55 | view
Seminar by Emmanuel Vincent on audio source separation

On Thursday February 10th 2011, at 15:30h in room 52.321, Emmanuel Vincent, researcher from INRIA-Rennes, will give a talk on "A flexible framework for audio source separation".

Abstract: Source separation consists of extracting the signal produced by each sound source from a recording. It is a mainstream topic in music and audio processing, with applications ranging from speech enhancement and recognition to 3D music upmixing and post-production. In this talk, I will provide an overview of probabilistic model-based approaches, with a particular focus on the recent variance modeling paradigm. I will show how this paradigm leads to a flexible audio source separation framework able to exploit a wide range of prior information about the sources, and I will play several sound examples.

8 Feb 2011 - 10:51 | view
Open PhD and postdoctoral positions for the CompMusic project
CompMusic, Computational Models for the Discovery of the World's Music, is a research project funded by the European Research Council and coordinated by Xavier Serra from the Music Technology Group of the Universitat Pompeu Fabra in Barcelona (Spain). The project will start in July 2011 and will last for five years.

The main goal of the project is to advance the field of Music Computing by approaching a number of current research challenges from a multicultural perspective. It aims to advance the description and formalization of music, making it more accessible to computational approaches and reducing the gap between audio signal descriptions and semantically meaningful music concepts. It will focus on the development of information modelling techniques applicable to non-western music repertoires, developing computational models to represent culture-specific music contexts.

CompMusic will approach these challenges by combining methodologies from disciplines such as Computational Musicology, Music Cognition, Information Processing and HCI. It will deal with a variety of information sources such as audio features, symbolic scores, text commentaries, user evaluations, etc. It will focus on some of the major non-western art-music traditions, specifically Indian (Hindustani, Carnatic), Turkish-Arab (Ottoman, Andalusian) and Chinese (Han). The project will involve research teams and users immersed in the different music cultures.

At this point of the project we are looking to fill several post-doc positions at the MTG-UPF to work on topics related to Computational Musicology and Music Cognition. We are also looking to fill several PhD positions at the MTG-UPF to work on a variety of topics related to the project. The students will enrol in the PhD program of the Department of Information and Communication Technologies of the UPF. Interested applicants, for both postdoc and PhD positions, should send a CV and a letter of motivation, expressing the research interests in relation to the CompMusic project, to Xavier Serra (xavier [dot] serra [at] upf [dot] edu).

Relevant links:
Xavier Serra: http://www.dtic.upf.edu/~xserra/
Music Technology Group: http://mtg.upf.edu/
Universitat Pompeu Fabra: http://www.upf.edu/en
PhD program of the UPF: http://www.upf.edu/doctorats/en/programes/audiovisuals/presentacio/
European Research Council: http://erc.europa.eu/
7 Feb 2011 - 17:02 | view
Participation at the AES 41st Conference on Audio for Games

Oscar Mayor and Jordi Janer will participate in the AES 41st Conference on Audio for Games, to be held 2nd-4th February 2011 in London, UK. This specialist conference provides a relevant and in-depth look at game audio, offering a forum for professionals and academics to communicate about issues pertinent to this field.

Oscar and Jordi will be presenting the following papers:

1 Feb 2011 - 11:36 | view