News and Events

Participation in VSGAMES 2016

Álvaro Sarasúa and Jordi Janer, members of the MIR-lab@MTG, presented some prototypes at VSGAMES 2016, the 8th International Conference on Virtual Worlds and Games for Serious Applications.

These games were developed in the context of the PHENICX project, with the goal of enabling interaction with classical music concerts.

Janer, J., Gómez, E., Martorell, A., Miron, M., & de Wit, B. (2016). Immersive Orchestras: audio processing for orchestral music VR content. VSGAMES 2016 - 8th International Conference on Virtual Worlds and Games for Serious Applications.

Sarasúa, Á., Melenhorst, M., Julià, C. F., & Gómez, E. (2016). Becoming the Maestro - A Game to Enhance Curiosity for Classical Music. 8th International Conference on Virtual Worlds and Games for Serious Applications (VS-Games 2016).

 

12 Sep 2016 - 09:42
Participation in SMC 2016
Olga Slizovskaia, Sertan Şentürk, Rong Gong and Juanjo Bosch will participate in the 13th Sound and Music Computing Conference, which takes place in Hamburg from August 31st to September 3rd 2016. They will be presenting the following papers:
29 Aug 2016 - 10:35
Talk on factor analysis for audio classification tasks by Hamid Eghbal-zadeh
1 Aug 2016
On Monday, 1st of August at 15:00h in room 55.410 there will be a talk by Hamid Eghbal-zadeh (Department of Computational Perception, Johannes Kepler University of Linz, Austria) on "A small footprint for audio and music classification".
 
Abstract: In many audio and music classification tasks, the aim is to provide a low-dimensional representation of audio excerpts with high discriminative power, to be used as excerpt-level features instead of the raw audio feature sequence. One approach is to summarize the acoustic features into a statistical representation and use it for classification. A problem with many statistical representations, such as adapted GMMs, is that they are very high dimensional and also capture unwanted characteristics of the audio excerpts that do not represent their class. Using factor analysis, the dimensionality can be dramatically reduced and the unwanted factors can be discarded from the statistical representations. The state of the art in many speech-related tasks uses a specific form of factor analysis to extract a small footprint from speech recordings. This fixed-length, low-dimensional representation is known as the i-vector. I-vectors have recently been adopted in MIR and have shown great promise. Recently, we won the acoustic scene classification challenge (DCASE 2016) using i-vectors, and we will present our noise-robust music artist recognition system based on i-vector features at ISMIR 2016.
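As a rough illustration of the general idea, here is a minimal sketch using generic scikit-learn components and synthetic data. It is not the speaker's actual system: real i-vectors come from a dedicated total-variability model rather than plain factor analysis, and the "supervector" below is a simplified stand-in for MAP-adapted GMM means.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.decomposition import FactorAnalysis

# Hypothetical frame-level features (e.g. MFCCs) for a set of audio excerpts:
# each excerpt is a (n_frames, n_dims) sequence of varying length.
rng = np.random.default_rng(0)
excerpts = [rng.normal(size=(rng.integers(200, 400), 20)) for _ in range(50)]

# 1) Fit a background GMM (a "UBM") on frames pooled from all excerpts.
ubm = GaussianMixture(n_components=8, covariance_type='diag', random_state=0)
ubm.fit(np.vstack(excerpts))

def supervector(frames, gmm):
    """Excerpt-level statistic: posterior-weighted mean per component, stacked."""
    post = gmm.predict_proba(frames)            # (n_frames, n_components)
    counts = post.sum(axis=0) + 1e-9            # soft occupancy counts
    means = post.T @ frames / counts[:, None]   # (n_components, n_dims)
    return means.ravel()                        # high-dimensional supervector

X = np.array([supervector(e, ubm) for e in excerpts])   # (50, 8 * 20)

# 2) Factor analysis: keep a few latent factors as the low-dimensional,
# fixed-length excerpt representation (in the spirit of i-vectors).
fa = FactorAnalysis(n_components=10, random_state=0)
excerpt_features = fa.fit_transform(X)                   # (50, 10)
print(excerpt_features.shape)
```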
28 Jul 2016 - 16:00
Large participation of the MTG at ISMIR 2016
16 MTG researchers participate in the 17th International Society for Music Information Retrieval Conference (ISMIR 2016), which takes place in New York from August 7th to 11th 2016. ISMIR is the world’s leading research forum on processing, searching, organizing and accessing music-related data. The MTG's main contributions are 11 papers in the main program, 2 tutorials, and 2 papers in the satellite workshop DLFM 2016.
 
Here are the papers presented as part of the main program:
 
 
Here are the tutorials that MTG members are organizing or involved in:
27 Jul 2016 - 10:47
Korg releases a new tuner in collaboration with the MTG
Korg has announced the TM-50TR, a Tuner / Metronome / Tone Trainer device that detects not only the pitch, but also the volume and tone of the sound as a performer plays. The Tone Trainer function is based on KORG's new ARTISTRY technology. This is proprietary technology for analyzing and evaluating sound that was developed through cooperative research under the supervision of Xavier Serra, Director of the Music Technology Group at the Pompeu Fabra University in Barcelona, Spain. 
 
In addition to its high precision as a tuner, the TM-50TR features a new "Tone Trainer" function that can evaluate the player's sound in even greater detail. When the performer plays a sustained note, the TM-50TR detects not only the pitch, but also the dynamics (volume) and brightness (tonal character). These three elements are displayed on the TM-50TR’s meter in real time. When the performer finishes the note, the stability of each of these three elements is shown in a graph, making it possible to see at a glance whether the sound is stable.
 
By analyzing these three basic elements of sound, including tuning, the TM-50TR can identify which aspects of the performer's playing need improvement, helping them practice more efficiently.
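For intuition only, here is a toy sketch of how pitch, volume and brightness could be tracked over a sustained note and summarized into per-element stability scores. This is a generic signal-processing illustration, not KORG's ARTISTRY algorithm; all function names and parameter choices are made up.

```python
import numpy as np

def analyse_note(y, sr, frame=2048, hop=512):
    """Rough per-frame pitch, volume and brightness estimates for a sustained note."""
    pitches, volumes, brightness = [], [], []
    for start in range(0, len(y) - frame, hop):
        x = y[start:start + frame] * np.hanning(frame)
        # volume: root-mean-square energy of the frame
        volumes.append(np.sqrt(np.mean(x ** 2)))
        # brightness: spectral centroid of the frame
        spec = np.abs(np.fft.rfft(x))
        freqs = np.fft.rfftfreq(frame, 1 / sr)
        brightness.append(np.sum(freqs * spec) / (np.sum(spec) + 1e-12))
        # pitch: autocorrelation peak within a plausible lag range (~60 Hz to 1 kHz)
        ac = np.correlate(x, x, mode='full')[frame - 1:]
        lo, hi = int(sr / 1000), int(sr / 60)
        lag = lo + np.argmax(ac[lo:hi])
        pitches.append(sr / lag)
    # "stability" of each element: coefficient of variation (lower = steadier)
    return {name: np.std(vals) / (np.mean(vals) + 1e-12)
            for name, vals in {'pitch': pitches,
                               'volume': volumes,
                               'brightness': brightness}.items()}

# Example: a 2-second 440 Hz tone with slight vibrato
sr = 44100
t = np.arange(0, 2.0, 1 / sr)
tone = np.sin(2 * np.pi * 440 * t * (1 + 0.002 * np.sin(2 * np.pi * 5 * t)))
print(analyse_note(tone, sr))
```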
25 Jul 2016 - 09:44
Best paper award at NIME 2016

A paper presented by MTG researchers (Cárthach Ó Nuanáin, Sergi Jordà and Perfecto Herrera) has received the best paper award at the 16th International Conference on New Interfaces for Musical Expression (NIME), one of the most relevant and influential conferences in the area of music technology, held recently in Brisbane, Australia.

The paper "An Interactive Software Instrument for Real-time Rhythmic Concatenative Synthesis" describes an approach for generating and visualising new rhythmic patterns from existing audio in real-time using concatenative synthesis. A graph-based model enables a novel 2-dimensional visualisation and manipulation of new patterns that mimic the rhythmic and timbral character of an existing target seed pattern. A VST audio plugin has been implemented using the reported research and has got positive acceptance not only in Brisbane's presentation but also in other non-academic meetings like Sonar+D and Music Tech Fest.

22 Jul 2016 - 15:29 | view
Keynote at IMS Conference 2016

Xavier Serra gives a keynote at the Conference of the International Musicological Society, which takes place from July 1st to 6th 2016 in Stavanger, Norway.

Title: The computational study of a musical culture through its digital traces

Abstract: Most musical cultures leave digital traces, digital artefacts that can be processed and studied computationally, and this has been the focus of computational musicology for several decades. This type of research requires clear formalizations and some simplifications, for example by considering that a musical culture can be conceptualized as a system of interconnected entities. A musician, an instrument, a performance, or a melodic motive are examples of entities, and they are linked through various types of relationships. We then need adequate digital traces of the entities; for example, a textual description can be a useful trace of a musician, and a recording a trace of a performance. The analytical study of these entities and of their interactions is accomplished by processing the digital traces and by generating mathematical representations and models of them. A more ambitious goal is to go beyond the study of individual artefacts and analyze the overall system of interconnected entities in order to model a musical culture as a whole. The reader might think that this is science fiction, and she might be right, but there is research trying to advance in this direction. In this article we overview the challenges involved in this type of research and review some results obtained in various computational studies that we have carried out on several music cultures. In these studies, we have used audio signal processing, machine learning, and semantic web methodologies to describe various characteristics of the chosen musical cultures.
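As a toy illustration of the "system of interconnected entities" formalization mentioned above (invented placeholder entities and relation labels, not the actual research infrastructure):

```python
from collections import defaultdict

# Typed nodes (musician, performance, recording, motive) linked by
# labelled relations; entity names here are hypothetical placeholders.
relations = defaultdict(list)   # entity -> [(relation, entity), ...]

def link(source, relation, target):
    relations[source].append((relation, target))

link("musician:M1", "performs_in", "performance:P1")
link("performance:P1", "documented_by", "recording:R1.wav")
link("recording:R1.wav", "contains", "motive:melodic_motive_7")

# Analyses of the individual digital traces (audio, text, scores) attach to
# the corresponding nodes; traversing the links supports culture-level questions.
for relation, target in relations["musician:M1"]:
    print(relation, "->", target)
```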
 
29 Jun 2016 - 23:09
Best paper awards at FMA 2016 and CBMI 2016

In the same week, two MTG papers received best paper awards at two conferences. Georgi Dzhambazov, first author, received the best paper award at FMA 2016 for the paper "Automatic Alignment of Long Syllables in A Cappella Beijing Opera", and Jordi Pons, first author, received the best paper award at CBMI 2016 for the paper "Experimenting with Musically Motivated Convolutional Neural Networks".

24 Jun 2016 - 17:31
Participation in the Data-driven Knowledge Extraction Workshop at UPF

Several members of the MTG present their research projects at the María de Maeztu DTIC-UPF Data-driven Knowledge Extraction Workshop, which takes place at UPF on June 28th-29th 2016. The workshop is open to the public; free registration at www.upf.edu/mdm-dtic.

Here are the presentations with MTG participation:

 

21 Jun 2016 - 16:55
Participation in CBMI 2016

Jordi Pons participates in the 14th International Workshop on Content-based Multimedia Indexing (CBMI 2016), which takes place in Bucharest from June 15th to 17th 2016. He will present the following article:

15 Jun 2016 - 10:33