News and Events

Seminar on repovizz at McGill

A seminar by Esteban Maestre, Marie Curie Fellow, McGill University + Universitat Pompeu Fabra;
With assistance from Quim Limona, Universitat Pompeu Fabra

DATE: Tuesday June 10, 2014 at 12:30pm
LOCATION: Room A832, New Music Building, 527 Sherbrooke Street West (CIRMMT)

ABSTRACT: repovizz is an integrated online system capable of structural formatting and remote storage, browsing, exchange, annotation, and visualization of synchronous, multi-modal, time-aligned data. Motivated by a growing need for data-driven collaborative research, repovizz aims to resolve commonly encountered difficulties in sharing or browsing large collections of multi-modal datasets. In its current state, repovizz is designed to hold time-aligned streams of heterogeneous data: audio, video, motion capture, physiological signals, extracted descriptors, annotations, et cetera. The most popular formats for audio and video are supported, while CSV formats are adopted for streams other than audio or video (e.g. motion capture or physiological signals). The data itself is structured via customized XML files, allowing the user to (re-)organize multi-modal data in any hierarchical manner. Datasets are stored in an online database, allowing the user to interact with the data remotely through a powerful HTML5 visual interface accessible from any current web browser; this can be considered a key aspect of repovizz, since data can be explored, annotated, or visualized from any location.
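As a rough illustration of the hierarchical, XML-based structuring described above, the sketch below builds a small dataset description with Python's standard library. The element and attribute names here are invented for the example and are not the actual repovizz schema.

```python
# Sketch of a hierarchical dataset description for time-aligned,
# multi-modal streams. Element/attribute names are hypothetical.
import xml.etree.ElementTree as ET

root = ET.Element("dataset", name="violin-recording-01")

# One group per modality; each group holds one or more data streams.
audio = ET.SubElement(root, "group", name="audio")
ET.SubElement(audio, "stream", type="audio", file="take1.wav")

mocap = ET.SubElement(root, "group", name="motion-capture")
ET.SubElement(mocap, "stream", type="csv", file="bow_position.csv")
ET.SubElement(mocap, "stream", type="csv", file="bow_velocity.csv")

annotations = ET.SubElement(root, "group", name="annotations")
ET.SubElement(annotations, "stream", type="csv", file="note_onsets.csv")

print(ET.tostring(root, encoding="unicode"))
```

The point is only that the hierarchy is user-defined: modalities can be grouped and regrouped freely, since the XML tree carries the structure rather than the data files themselves.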

repovizz has been developed by the Music Technology Group of Universitat Pompeu Fabra over the past few years, in the context of large-scale research projects, and it is now close to a beta launch. In this seminar we'll give an overview of the main capabilities of repovizz and its current state of development, followed by a short tutorial.

27 May 2014 - 09:39 | view
Sonar Festival 2014
12 Jun 2014 - 14 Jun 2014

As is now tradition, the MTG is behind a number of interesting events at the Sonar Festival in Barcelona. This year's edition will take place from June 12th to 14th at Fira de Montjuïc, and the MTG is organising a great number of activities within the Sonar+D section.


Music Hack Day (MHD) – from Thursday, June 12th at 9am to Friday, June 13th at 6pm

This year's edition brings news and some changes; here's a little flavour of what's happening:

  • We have the participation of 20 amazing and innovative technologies, such as the Kickstarter project Mogees, which turns everyday objects into musical instruments, or Xth Sense, which allows the sonification of muscles (among others).
  • The GiantSteps European project plays an important role this year, posing two challenges to hackers and carrying out user studies to better understand the creative process participants follow when conceptualising their hacks.
  • Thanks to a collaboration with the Made makerspace, we're facilitating the on-site design and manufacturing of anything hackers need.
  • Wayra, the popular startup incubator, is also putting in its two cents at the MHD by helping hackers prepare their 3-minute presentations.

We are really grateful to our sponsors for helping us put this edition on: 7Digital, Deezer, GiantSteps, Kiics, MusixMatch, Native Instruments, Patchblocks, Rdio, Sendgrid, TECNIO and Universal Music.

Check out the MHD website for further details

‘MTG & FRIENDS’ stand at Market Lab – from June 12th to 14th (Thu to Sat)

An exhibition of several installations and demos of our technologies, from spin-offs, students and the Phonos-MTG grant.

On the occasion of the 5th anniversary of the Music Hack Day initiative, we are also exhibiting at our stand some of the best hacks from MHD since its first edition in London in 2009. The list of best hacks, carefully selected by a group of MHD veterans and experts, is as follows:

Together with the selection of the best hacks, visitors to the MTG stand will also have the chance to try new interactive musical interfaces based on brain activity. Two of the hacks developed within the Neuro-track at last year's MHD will be expanded in both technological and business aspects. The two selected applications are Play your Mood, a music explorer based on the user's emotional state, and BrainLoops, a musical interface for patients with locked-in syndrome.

These projects will receive technological support from Starlab, while Barcelona Activa will provide business support to the selected creators. This initiative is promoted by KiiCS and the Observatori de Comunicació Científica (OCC) of UPF.

Further details available here.

GiantSteps panel discussion – June 12th (Thu), from 18.30 to 19.30

Sergi Jordà, Principal Investigator of the project, will chair a panel discussion on several topics related to the GiantSteps project, with the participation of two electronic instrument design experts from Native Instruments and two Red Bull Music Academy musicians.

Check out the GiantSteps project website for further details

Spectral Diffractions – Mies van der Rohe pavilion – June 12th to 14th (Thu to Sat)

The SMC master's student Antònio Sa Pinto collaborates with the artist Edwin van der Heide on the sound installation "Spectral Diffractions".

Further details available on the Sonar website

21 May 2014 - 19:12 | view
New COFLA project

The MTG contributes to a new project on Computational Analysis of Flamenco Music (COFLA) coordinated by José Miguel Díaz Bañez (Escuela Técnica Superior de Ingeniería - Universidad de Sevilla) and funded by the Andalusian Government. The project started in February 2014 and has a duration of 4 years.

The MTG's tasks will be devoted to automatic transcription, similarity computation, automatic classification and synthesis of flamenco singing. The researchers involved are Nadine Kroher, Jordi Bonada, Sergio Oramas and Emilia Gómez.

Feb 2014-Feb 2018. COFLA2: Análisis Computacional de la Música Flamenca, Proyectos de Excelencia de la Junta de Andalucía, P12-TIC-1362

14 Mar 2014 - 09:43 | view
New IEEE/ACM TASLP paper on multi-feature beat tracking

Our article on multi-feature beat tracking for the IEEE/ACM Transactions on Audio, Speech, and Language Processing is now available online! This work was led by Jose R. Zapata for his PhD thesis, in collaboration with Matthew Davies from the SMC group in Porto. It is based on the idea of combining different experts, represented by periodicities from different onset detection functions, for beat estimation. This simple and clever idea, previously used to combine different beat tracking algorithms and to evaluate the difficulty of the task, has here been integrated into a single method.

Zapata, J. R., Davies, M. E. P., & Gómez, E. (2014). Multi-feature beat tracking. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 22(4), 816–825.

Abstract: A recent trend in the field of beat tracking for musical audio signals has been to explore techniques for measuring the level of agreement and disagreement between a committee of beat tracking algorithms. By using beat tracking evaluation methods to compare all pairwise combinations of beat tracker outputs, it has been shown that selecting the beat tracker which most agrees with the remainder of the committee, on a song-by-song basis, leads to improved performance which surpasses the accuracy of any individual beat tracker used on its own. In this paper we extend this idea towards presenting a single, standalone beat tracking solution which can exploit the benefit of mutual agreement without the need to run multiple separate beat tracking algorithms. In contrast to existing work, we re-cast the problem as one of selecting between the beat outputs resulting from a single beat tracking model with multiple, diverse input features. Through extended evaluation on a large annotated database, we show that our multi-feature beat tracker can outperform the state of the art, and thereby demonstrate that there is sufficient diversity in input features for beat tracking, without the need for multiple tracking models.
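The committee idea described in the abstract can be sketched in a few lines: given several candidate beat sequences, keep the one that most agrees with the rest. The agreement measure below is a deliberately naive tolerance-window match invented for this illustration; the paper itself relies on established beat-tracking evaluation measures, and its candidates come from one tracking model fed with multiple input features.

```python
# Toy mutual-agreement selection over candidate beat sequences.
# Agreement here is a naive fraction-of-matched-beats measure,
# not the evaluation measures used in the actual paper.

def agreement(beats_a, beats_b, tol=0.07):
    """Fraction of beats in beats_a matched by a beat in beats_b (+/- tol seconds)."""
    if not beats_a:
        return 0.0
    hits = sum(1 for a in beats_a if any(abs(a - b) <= tol for b in beats_b))
    return hits / len(beats_a)

def select_by_mutual_agreement(candidates):
    """Return the candidate sequence with the highest mean agreement
    against all other committee members."""
    scores = []
    for i, cand in enumerate(candidates):
        others = [c for j, c in enumerate(candidates) if j != i]
        scores.append(sum(agreement(cand, o) for o in others) / len(others))
    return candidates[scores.index(max(scores))]

# Three hypothetical beat estimates in seconds; the third is an outlier.
committee = [
    [0.50, 1.00, 1.50, 2.00],
    [0.51, 1.01, 1.49, 2.02],
    [0.25, 0.75, 1.25, 1.75],
]
print(select_by_mutual_agreement(committee))  # [0.5, 1.0, 1.5, 2.0]
```

The selection step needs no ground truth: the outlier sequence scores low simply because it disagrees with the rest of the committee.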

27 Feb 2014 - 20:51 | view
Melody Extraction Review published in the IEEE Signal Processing Magazine

Our review article on melody extraction algorithms for the IEEE Signal Processing Magazine is finally available online! The printed edition will be coming out in March 2014. This article provides an overview of approaches, challenges and applications for melody extraction from polyphonic music signals.

J. Salamon, E. Gómez, D. P. W. Ellis and G. Richard, "Melody Extraction from Polyphonic Music Signals: Approaches, Applications and Challenges", IEEE Signal Processing Magazine, 31(2):118-134, Mar. 2014.

Abstract: Melody extraction algorithms aim to produce a sequence of frequency values corresponding to the pitch of the dominant melody from a musical recording. Over the past decade melody extraction has emerged as an active research topic, comprising a large variety of proposed algorithms spanning a wide range of techniques. This article provides an overview of these techniques, the applications for which melody extraction is useful, and the challenges that remain. We start with a discussion of ‘melody’ from both musical and signal processing perspectives, and provide a case study which interprets the output of a melody extraction algorithm for specific excerpts. We then provide a comprehensive comparative analysis of melody extraction algorithms based on the results of an international evaluation campaign. We discuss issues of algorithm design, evaluation and applications which build upon melody extraction. Finally, we discuss some of the remaining challenges in melody extraction research in terms of algorithmic performance, development, and evaluation methodology.
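To make the task definition above concrete, here is a deliberately naive baseline: from a time-frequency salience representation, emit one frequency value per frame (the most salient bin), with 0 marking unvoiced frames. The function, threshold and toy data are invented for illustration; the algorithms the review surveys are far more involved (voicing detection, contour tracking, octave-error handling).

```python
# Naive frame-wise "melody extraction": pick the most salient
# frequency bin per frame, or 0.0 when the frame looks unvoiced.
# This is an illustration of the task's input/output, not a real method.

def naive_melody(salience, freqs, voicing_threshold=0.5):
    """salience: list of frames, each a list of per-bin salience values.
    freqs: centre frequency (Hz) of each bin.
    Returns one f0 value per frame (0.0 = unvoiced)."""
    melody = []
    for frame in salience:
        peak = max(range(len(frame)), key=lambda i: frame[i])
        melody.append(freqs[peak] if frame[peak] >= voicing_threshold else 0.0)
    return melody

freqs = [220.0, 330.0, 440.0]
salience = [
    [0.1, 0.2, 0.9],   # 440 Hz dominates
    [0.8, 0.3, 0.1],   # 220 Hz dominates
    [0.2, 0.1, 0.1],   # all bins weak -> unvoiced
]
print(naive_melody(salience, freqs))  # [440.0, 220.0, 0.0]
```

Even this toy version exposes the two sub-problems the review discusses: deciding *which* pitch is the melody in each frame, and deciding *whether* a melody is present at all.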

17 Feb 2014 - 13:00 | view
Participation in the AES 53rd Conference on Semantic Audio

Frederic Font, Jordi Janer and Xavier Serra participate in the 53rd Conference on Semantic Audio of the Audio Engineering Society, which takes place in London from January 26th to 29th, 2014.

Xavier has been invited to give a talk on CompMusic, entitled "Creating Research Corpora for the Computational Study of Music: the case of the CompMusic Project"; Frederic is giving a talk on his recent PhD research, "Audio clip classification using social tags and the effect of tag expansion"; and Jordi presents a paper co-authored with David S. Blancas, "Sound Retrieval from Voice Imitation Queries in Collaborative Databases".

21 Jan 2014 - 10:50 | view
TechTransfer position at the MTG through TECNIOspring

The MTG is part of a Catalan initiative named TECNIO, and through it there is an open call to incorporate an experienced researcher interested in carrying out TechTransfer activities.

TECNIOspring is a fellowship programme that provides financial support to individual mobility proposals presented by experienced researchers in liaison with a TECNIO centre (like our research group). Host institutions will offer fellows a stimulating and multidisciplinary scientific environment in which to develop their applied research projects with focus on technology transfer.

Fellows will be offered 2-year employment contracts to develop their applied research projects. Please note that this call has a strong focus on TechTransfer, so candidates are required to have at least one year of experience in applied research and/or technology transfer activities.

There are two types of fellowships:

  • Incoming - mobility for experienced researchers of any nationality willing to join our centre for 2 years. Candidates must hold a PhD plus four additional years of full-time equivalent research experience, or have eight years of full-time equivalent research experience.
  • Outgoing + return - mobility outside Spain for experienced researchers of any nationality residing in Catalonia, willing to join a research or technology centre or the R&D department of a private company for one year. This scheme includes a return phase of one more year at the MTG. Candidates must hold a PhD.

Further details about the funding per fellowship, the eligibility criteria and the evaluation process are available in the programme leaflet. If you are interested in applying, please send us a briefing about the project you propose, together with your CV, to mtg [at] upf [dot] edu (subject: tecniospring).


14 Jan 2014 - 14:24 | view
Seminar by Julián Urbano on Evaluation in MIR
16 Jan 2014

Julián Urbano, postdoc at the MTG, will give a seminar on "Evaluation in (Music) Information Retrieval through the Audio Music Similarity task" on January 16th at 3:30pm in room 52.321.

Abstract: Test-collection based evaluation in (Music) Information Retrieval has been used for half a century now as the means to evaluate and compare retrieval techniques and advance the state of the art. However, this paradigm makes certain assumptions that remain a research problem and that may invalidate our experimental results. In this talk I will approach this paradigm as an estimator of certain probability distributions that describe the final user experience. These distributions are estimated with a test collection, computing system-related distributions assumed to reliably correlate with the target user-related distributions. Using the Audio Music Similarity task as an example, I will talk about issues with our current evaluation methods, the degree to which they are problematic, how to analyze them and improve the situation. In terms of validity, we will see how the measured system distributions correspond to the target user distributions, and how this correspondence affects the conclusions we draw from an experiment. In terms of reliability, we will discuss optimal characteristics of test collections and statistical procedures. In terms of efficiency, we discuss models and methods to greatly reduce the annotation cost of an evaluation experiment.

13 Jan 2014 - 17:35 | view
MAIKA, the new Vocaloid singer by Voctro Labs, is now available!

Voctro Labs' Christmas gift has arrived! MAIKA, the new female Vocaloid 3 Voice Library, is a virtual singer that lets you create vocal parts on your computer without the need to record a real singer. By simply entering melody, lyrics and expression parameters, you'll be able to create lead vocals, vocal accompaniment, demo vocals and vocal effects; the possibilities are endless. MAIKA is designed to sing in Spanish, but contains a wide range of phonemes that also cover parts of other languages such as Portuguese, Italian, Catalan, English and Japanese.

MAIKA has a powerful feminine voice: softer and airier in the lower registers, with a more intense timbre in the higher ones. She has an extraordinarily broad pitch range, switching from chest voice to head voice in the highest registers. This makes her voice suited to a wide range of musical genres and styles.

You can directly download the edition or, if you prefer, order the boxed limited edition from Voctro Labs' website.

20 Dec 2013 - 13:42 | view
Application open for Master and PhD programs of the UPF
19 Nov 2013 - 27 Jun 2014

From November 19th 2013 to June 27th 2014, applications are open for all of UPF's master's and doctoral programmes for the 2014-2015 academic year.

For the Master in Sound and Music Computing, you can find the information here. To do a PhD at the MTG, you must enrol in the PhD programme in Information and Communication Technologies; you can find the information here.


9 Dec 2013 - 00:29 | view