News and Events

Seminar by Henkjan Honing on music cognition

Henkjan Honing, from the University of Amsterdam, will give a seminar on "Music, Cognition and the Origins of Musicality" on Friday, April 26th, at 15:30h in room 55.309.

Abstract: While it has recently become quite popular to address the study of the origins of music from an evolutionary perspective, there is still little agreement on the idea that music is in fact an adaptation, that it influenced our survival, or that it made us sexually more attractive. Music appears to be of little use, so why argue that it is an adaptation? While it is virtually impossible to underpin the evolutionary role of musicality as a whole, the apparent innateness and the species and cognitive specificity of its hypothesized components allow prof. dr. Henkjan Honing to outline what makes us musical animals.

22 Apr 2013 - 16:01
Seminar by Fabien Gouyon on a critical take on MIR

Fabien Gouyon, from INESC-Porto, will give a seminar on Thursday, April 25th 2013 at 3:30pm in room 55.321 on "Are we there yet?! A critical take on some Music Information Retrieval technologies."

Abstract: The amount and availability of professionally produced and user-generated media is continuously increasing; the ways we interact with media today, and how we expect to do so tomorrow, are profoundly changing. To empower these changes, Information Technologies deal to a large extent with tackling new issues emerging in media processing. For instance, Music Information Retrieval (MIR) is a fast-paced multidisciplinary research community focusing on the processing of a ubiquitous, yet particularly challenging type of media: music. Typical MIR tasks include automatic music genre recognition, inferring "tags" and tracking beats from audio signals, and applications range from playlist generation to personalized music recommendation. Scientific publications regularly report improvements in such tasks, and for a number of them the reported results appear to be reaching very high performance. In this talk, I will give a critical overview of some results reported in the MIR community, and argue that there is still much road ahead of us before MIR technologies are truly useful in reliable, large-scale IT systems.

19 Apr 2013 - 16:35
Ines Salselas defends her PhD thesis on April 26th

Ines Salselas defends her PhD thesis entitled "Exploring interactions between music and language during the early development of music cognition. A computational modelling approach" on Friday 26th of April 2013 at 11:00h in room 55.309.

The jury members of the defense are: Fabien Gouyon (INESC Porto), Henkjan Honing (University of Amsterdam), Nuria Sebastian (UPF).

Abstract: This dissertation concerns the computational modelling of the early-life development of music perception and cognition. Results from experimental psychology and neuroscience suggest that the development of musical representations in infancy, whether concerning pitch or rhythm features, depends on exposure both to music and to language. Early musical and linguistic skills therefore seem to be entangled in ways we have yet to characterize. In parallel, computational modelling has produced powerful frameworks for the study of learning and development. The use of these models to study the development of music perception and cognition, connecting music and language, remains to be explored.
We therefore propose to produce computational solutions suitable for studying the factors that shape our cognitive structure, building the predispositions that allow us to enjoy and make sense of music. We also adopt a comparative approach to the early development of musical predispositions that involves both music and language, searching for possible interactions and correlations. With this purpose, we first address pitch representation (absolute vs relative) and its relation to development. Simulations allowed us to observe a parallel between learning and the type of pitch information being used: the type of encoding influenced the model's ability to perform a discrimination task correctly. Next, we performed a prosodic characterization of infant-directed speech and singing by comparing rhythmic and melodic patterning in two Portuguese variants (European and Brazilian). In the computational experiments, rhythm-related descriptors exhibited a strong predictive ability in the discrimination tasks for both speech and singing, with different rhythmic patterning for each variant. This reveals that the prosody of an infant's surrounding sonic environment is a source of rich information, with rhythm as a key element for characterizing the prosody of the language and songs of each culture. Finally, we propose a computational model based on temporal information processing and representation for exploring how the temporal prosodic patterns of a specific culture influence the development of rhythmic representations and predispositions. The simulations show that exposure to the surrounding sound environment influences the development of temporal representations, and that the structure of that environment, specifically the lack of maternal songs, has an impact on how the model organizes its internal representations.
We conclude that there is a reciprocal influence between music and language. It is from exposure to the structure of the sonic background that we shape our cognitive structure, which supports our understanding of musical experience. Within that sonic background, the structure of language plays a predominant role in building musical predispositions and representations.

18 Apr 2013 - 18:13
Registration for the Neuro Music Hack Day is now open!
15 Apr 2013 - 15 May 2013

In addition to the regular Music Hack Day track, this year’s BCN Music Hack Day includes a special Neuro-track that aims at providing a set of useful tools, hardware and APIs to encourage hacks that bring together music, brain signals and other physiological sensors.

Through this approach, we want to encourage new forms of music creation and interaction using physiological signals. Along the same lines, the MHD will offer a pre-event introductory workshop on June 12th, where the hardware devices (BCI, Enobio and other physiological sensors) that will be made available to participants of the MHD, together with the related APIs, will be presented to everyone interested in developing hacks within the neuroscience track.

In the registration form you can either choose the Regular Music Hack Day Track or the Neuro Music Hack Day Track, depending on your background and/or hacking interests. Space for this event is limited so we can't guarantee everyone a spot. We'll try to keep things on a first come, first served basis, whilst at the same time ensuring that we have a good mix of people who are prepared to build something or contribute in some other valuable way. We'll send out a confirmation email as the date approaches and will accommodate as many hackers as we can.

15 Apr 2013 - 11:11
Freesound: 8th anniversary

Today is the 8th anniversary of Freesound! Congratulations to the researchers, developers, moderators, donors and users who have been and are part of this project for the success achieved so far. Thanks to the collaboration of this great community we now have more than 160,000 free sounds from all over the world.

Freesound was started in 2005 in our research group. One of the first aims behind it was to create an open repository of sounds to be used for scientific and artistic research, but it quickly became a very popular site used by a wide variety of people to share sounds and the experiences around them. Users upload sounds they have recorded or created themselves and share them under Creative Commons licenses. The sounds in Freesound are good quality, free and legal to use, so professionals from different fields (cinema, music, videogames, software...) as well as amateur users use them in their work.

Freesound now has 3.5 million registered users around the world and about 40,000 unique visits a day; the total number of unique visits over these eight years has been more than 57 million, coming from almost every country in the world.

The sounds in Freesound have been put to many successful uses, for example in the movie Children of Men and in a song by the internationally known band The Prodigy. Freesound has received several awards, such as a BMW award, a Barcelona City award and two Google research awards. Most importantly, during all these years users have shared truly amazing sounds. See the community's opinion on the coolest sounds on Freesound: http://www.freesound.org/forum/freesound-project/33568/

Interest in sharing sounds keeps growing, and so does the community of active Freesound users. For the MTG, Freesound offers an exceptional framework in which to carry out research in the context of semantic web technologies. But we are especially proud of being able to offer a very useful service to society.
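Beyond the website, the Freesound archive can also be queried programmatically. As a minimal sketch (the endpoint path, field names and the `YOUR_API_KEY` placeholder below follow the public Freesound API conventions, but are illustrative assumptions, not values taken from this post), building a text-search request could look like:

```python
from urllib.parse import urlencode

# Assumed Freesound APIv2 text-search endpoint; YOUR_API_KEY is a placeholder
# for a per-account key obtained from the Freesound site.
BASE = "https://freesound.org/apiv2/search/text/"
params = {
    "query": "rain",              # free-text search term
    "fields": "id,name,license",  # restrict which fields the response returns
    "token": "YOUR_API_KEY",      # authentication token
}
url = BASE + "?" + urlencode(params)
# The resulting URL can be fetched with any HTTP client; the response is JSON.
```

The sketch only builds the request URL, so it runs without network access; fetching and paging through results would be done with an ordinary HTTP client on top of it.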

5 Apr 2013 - 09:26
TONAS: a new dataset of flamenco a cappella sung melodies with corresponding manual transcriptions

As a little Friday gift, we're glad to announce the release of a new dataset of flamenco singing: TONAS.

The dataset includes 72 sung excerpts representative of three a cappella flamenco singing styles, i.e. Tonás (Debla and two variants of Martinete), together with manually corrected fundamental frequency and note transcriptions.

This collection was built by the COFLA team in the context of our research project for melodic transcription, similarity and style classification in flamenco music.


Further information about the music collection, including how the samples were transcribed and by whom, is available on the dataset website, where you can of course download the audio, metadata and transcription files.


We hope that you find this collection useful, whether for automatic transcription of the singing voice or any other research topic (e.g. pitch estimation, onset detection, melodic similarity, singer identification, style classification), and we hope this dataset will increase the interest of our scientific community in the particular challenges of flamenco singing.


We would be very interested to receive your feedback.

Best regards,

The COFLA team

15 Mar 2013 - 12:11
Barcelona Music Hack Day 2013 - Neuroscience and Music (Special Track)
13 Jun 2013 - 14 Jun 2013

Barcelona Music Hack Day 2013
13th - 14th June 2013 - Sonar Festival (Sonar+D)

Neuroscience and Music (Special Track)

Get the chance to develop a new application interfacing music with the brain at the Barcelona Music Hack Day 2013! We are looking for new connections between neuroscience and music.

The Music Hack Day (MHD) is a 24-hour hacking session in which participants conceptualize, create and present their projects. Any music technology (software, mobile applications, hardware, artworks, web development) goes, as long as it is music related. The MHD has been a great way to demonstrate the creativity around music that comes from the tech community. The past three years have seen more than 20 MHD events take place around the world. Starting in London, it has spread to Berlin, Amsterdam, Boston, Stockholm, San Francisco, Barcelona, New York, Sydney, Montreal... The MHD has gathered over 2000 participants building hundreds of hacks, with over 125 music and tech companies supporting the events. The Music Technology Group (MTG) of Universitat Pompeu Fabra (UPF) has hosted the MHD in Barcelona since 2010, and it is currently organized in the frame of the Sónar festival.

With the support of the EC-funded project KiiCS (Knowledge Incubation in Innovation and Creation for Science), this year the Barcelona MHD will include a special neuroscience track aiming to provide a set of useful tools and APIs to encourage hacks that bring together music, brain signals, Brain-Computer Interfaces and other physiological sensors. Through this approach, we want to encourage new forms of music creation and interaction. Along the same lines, the MHD will offer a pre-event introductory workshop where the hardware devices (BCI, Enobio and other physiological sensors) that will be made available to participants of the MHD, together with the related APIs, will be presented to everyone interested in developing hacks within the neuroscience track.

This initiative is led by the Music Technology Group (MTG) in collaboration with the Science Communication Observatory at UPF through the KiiCS project, and it is supported by the research group Synthetic, Perceptive, Emotive and Cognitive Systems (SPECS), also from UPF, and by Starlab Barcelona SL.

*** Important dates ***
Registration period: from April 15th to May 15th
N+MHD Workshop: Wednesday, June 12th 2013
Barcelona MHD: June 13th and 14th 2013

Come and build the future of music and neuroscience!

The Barcelona MHD is organized by MTG-UPF in the frame of Sónar+D. Original idea by Dave Haynes

12 Mar 2013 - 19:07
Seminar by Jordi Janer on music signal source separation

Jordi Janer, from the MTG, will give a talk on "Methods for Music Signal Source Separation of Professionally Produced Recordings" on Thursday, February 14th 2013 at 15:30h in room 52.321 of the Communication-Poblenou Campus of the UPF.

Abstract:
This presentation addresses the topic of music signal source separation. We show the outcome of an industrial joint-research project at the Music Technology Group of UPF. From the initial goal of removing the lead instrument from professionally produced music recordings, we worked towards a general framework for music signal modelling and separation. These methods introduce some novelties over the state of the art, extending approaches such as Non-negative Matrix Factorization (NMF). We present timbre classification for predominant pitch detection, vocal residual treatment, monophonic and polyphonic polytimbral source/filter models, and harmonic/percussive separation. Our methods can be grouped into two categories depending on the field of application: a) low-latency/low-computation and b) high-latency/high-computation. Several demos and potential uses of music source separation will be shown in the talk.
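The abstract builds on NMF, which factorizes a non-negative magnitude spectrogram V into spectral templates W and time activations H (V ≈ WH). As a rough illustration of that underlying idea only, and not of the specific methods presented in the talk, here is a minimal sketch using the standard Lee-Seung multiplicative updates on a toy synthetic spectrogram:

```python
import numpy as np

def nmf(V, n_components, n_iter=500, eps=1e-9):
    """Factorize non-negative V (freq x time) as V ~ W @ H using
    multiplicative updates for the Euclidean (Frobenius) cost."""
    rng = np.random.default_rng(0)
    W = rng.random((V.shape[0], n_components)) + eps  # spectral templates
    H = rng.random((n_components, V.shape[1])) + eps  # activations over time
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)  # update activations
        W *= (V @ H.T) / (W @ H @ H.T + eps)  # update templates
    return W, H

# Toy "spectrogram": two sources with fixed spectra and varying gains,
# so V is exactly rank 2 and non-negative.
templates = np.array([[1.0, 0.0],
                      [0.5, 1.0],
                      [0.0, 1.0]])
gains = np.random.default_rng(1).random((2, 8))
V = templates @ gains
W, H = nmf(V, n_components=2)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)  # relative reconstruction error
```

In a separation setting, each column of W would correspond to one source's spectral template and the product of that column with its row of H would give the source's magnitude estimate; the talk's methods add further structure (source/filter models, timbre classification) on top of this basic decomposition.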

Biography:
Jordi Janer is a researcher at the Music Technology Group of the Universitat Pompeu Fabra in Barcelona. His research interests cover audio signal processing with a focus on the human voice, source separation, applications for real-time music interaction, and environmental sound analysis and soundscape modelling. After graduating in Electronic Engineering (2000), he started his career as a DSP engineer at Creamware GmbH (Germany, 2000-2003), designing and developing audio effects and virtual synthesizers. He later joined the UPF, obtaining his PhD degree in 2008. He has been a visiting researcher at McGill University (Canada, 2005) and at Northwestern University (USA, 2009). His activity as a researcher and project manager in recent years involves various publicly funded research projects (2004-2013) and joint-research collaborations with Yamaha Corp. (Japan). In 2011 he also co-founded Voctro Labs, a spin-off company specializing in voice processing solutions for the audiovisual media industry.

11 Feb 2013 - 17:07
New funded project to change the way we enjoy classical music concerts

PHENICX (Performances as Highly Enriched aNd Interactive Concert eXperiences) is a STREP project coordinated by the MTG in collaboration with TU Delft and funded by the European Commission.

The project will make use of state-of-the-art digital multimedia and internet technology to make traditional concert experiences rich and universally accessible: concerts will become multimodal, multi-perspective and multilayer digital artefacts that can be easily explored, customized, personalized, (re)enjoyed and shared among users. The main goal is twofold: (a) to make live concerts appealing to potential new audiences and (b) to maximize the quality of the concert experience for everyone.

PHENICX will last 36 months starting on the 1st of February 2013, and the partner institutions involved are TU Delft, Johannes Kepler Universität Linz (JKU), Stichting Koninklijk Concertgebouworkest (RCO), VideoDock BV (VD), Oesterreichische Studiengesellschaft Fuer Kybernetik (OFAI) and Escola Superior de Música de Catalunya (ESMUC).

The MTG team, coordinated by Emilia Gómez and Alba B. Rosado, will bring its expertise in audio processing (Jordi Janer & Jordi Bonada), music information retrieval (Agustín Martorell & Juanjo Bosch) and music interaction (Carles Fernández & Sergi Jordà) to work on research challenges such as source separation, acoustic rendering, music visualization and gesture-based music interaction.

6 Feb 2013 - 21:48
New funded project to work on traditional music repertoires

SIGMUS (SIGnal Analysis for the Discovery of Traditional MUSic Repertories) is a new MTG project funded by the Spanish Ministry of Economy and Competitiveness. SIGMUS will last 36 months starting on the 1st of February 2013 and will focus on the melodic and rhythmic characteristics of flamenco and Arab-Andalusian music repertoires, applying audio processing and semantic analysis methodologies.


5 Feb 2013 - 13:43