News and Events

Barcelona Music Hack Day 2013 - Neuroscience and Music (Special Track)
13 Jun 2013 - 14 Jun 2013

Barcelona Music Hack Day 2013
13th - 14th June 2013 - Sónar Festival (Sónar+D)

Neuroscience and Music (Special Track)

Take the chance to develop a new application interfacing music with the brain at the Barcelona Music Hack Day 2013! We are looking for new connections between neuroscience and music.

The Music Hack Day (MHD) is a 24-hour hacking session in which participants conceptualize, create and present their projects. Any music technology goes, e.g. software, mobile applications, hardware, artworks or web development, as long as it is music related. The MHD has been a great way to demonstrate the creativity around music that comes from the tech community. The past three years have seen more than 20 MHD events take place around the world: starting in London, the event has spread to Berlin, Amsterdam, Boston, Stockholm, San Francisco, Barcelona, New York, Sydney, Montreal... The MHD has gathered over 2000 participants, who have built hundreds of hacks, with over 125 music and tech companies supporting the events. The Music Technology Group (MTG) of Universitat Pompeu Fabra (UPF) has hosted the MHD in Barcelona since 2010, and it is currently organized within the framework of the Sónar festival.

With the support of the EC-funded project KiiCS (Knowledge Incubation in Innovation and Creation for Science), this year the Barcelona MHD will include a special neuroscience track aimed at providing a set of useful tools and APIs to encourage hacks that bring together music, brain signals, Brain-Computer Interfaces and other physiological sensors. Through this approach we want to encourage new forms of music creation and interaction. Along the same lines, the MHD will offer a pre-event introductory workshop in which the hardware devices that will be made available to participants (BCI, Enobio and other physiological sensors) and the related APIs will be presented to everyone interested in developing hacks within the neuroscience track.

This initiative is led by the Music Technology Group (MTG) in collaboration with the Science Communication Observatory at UPF through the KiiCS project, and it is supported by the research group Synthetic, Perceptive, Emotive and Cognitive Systems (SPECS), also from UPF, and by Starlab Barcelona SL.

*** Important dates ***
Registration period: from April 15th to May 15th
N+MHD Workshop: Wednesday, June 12th 2013
Barcelona MHD: June 13th and 14th 2013

Come and build the future of music and neuroscience!

The Barcelona MHD is organized by MTG-UPF in the frame of Sónar+D. Original idea by Dave Haynes

12 Mar 2013 - 18:07
Seminar by Jordi Janer on music signal source separation

Jordi Janer, from the MTG, gives a talk on "Methods for Music Signal Source Separation of Professionally Produced Recordings" on Thursday February 14th 2013 at 15:30h in room 52.321 of the Communication-Poblenou Campus of the UPF.

Abstract:
This presentation addresses the topic of music signal source separation. We show the outcome of an industrial joint-research project at the Music Technology Group of UPF. From an initial goal of removing the lead instrument from professionally produced music recordings, we worked towards a general framework for music signal modeling and separation. These methods introduce some novelties over the state of the art, extending approaches such as Non-negative Matrix Factorization (NMF). We present timbre classification for predominant pitch detection, vocal residual treatment, monophonic and polyphonic polytimbral source/filter models, and harmonic/percussive separation. Our methods can be grouped into two categories depending on the field of application: a) low-latency/low-computation and b) high-latency/high-computation. Several demos and potential uses of music source separation will be introduced in the talk.
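As a rough, hedged illustration of the NMF baseline such methods extend (not the specific techniques presented in the talk), a magnitude spectrogram can be factorized and one source rebuilt with a soft mask; the file name, component count and component assignment below are placeholder assumptions:

```python
import numpy as np
import librosa
from sklearn.decomposition import NMF

# Load a mixture and compute its magnitude spectrogram ("mix.wav" is a placeholder).
y, sr = librosa.load("mix.wav", sr=None)
S = librosa.stft(y, n_fft=2048, hop_length=512)
V = np.abs(S)                                # non-negative magnitude spectrogram

# Plain NMF decomposition: V ~= W @ H
model = NMF(n_components=16, init="nndsvd", max_iter=400)
W = model.fit_transform(V)                   # spectral templates (freq x components)
H = model.components_                        # activations (components x frames)

# Rebuild one "source" from a subset of components with a soft (Wiener-like) mask,
# reusing the mixture phase; which components belong to the source is assumed here.
idx = np.arange(8)
V_src = W[:, idx] @ H[idx, :]
mask = V_src / (W @ H + 1e-10)
y_src = librosa.istft(mask * S, hop_length=512)
```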

Biography:
Jordi Janer is a researcher at the Music Technology Group of the Universitat Pompeu Fabra in Barcelona. His research interests cover audio signal processing with a focus on the human voice, source separation, applications for real-time music interaction, and environmental sound analysis and soundscape modelling. He graduated in Electronic Engineering (2000) and started his career as a DSP engineer at Creamware GmbH (Germany, 2000-2003), designing and developing audio effects and virtual synthesizers. He later joined the UPF, where he obtained his PhD degree in 2008. As a visiting researcher, he stayed at McGill University (Canada, 2005) and at Northwestern University (USA, 2009). In recent years his activity as a researcher and project manager has involved various publicly funded research projects (2004-2013) and joint research collaborations with Yamaha Corp. (Japan). In 2011 he also co-founded Voctro Labs, a spin-off company specialized in voice processing solutions for the audiovisual media industry.

11 Feb 2013 - 16:07
New funded project to change the way we enjoy classical music concerts

PHENICX (Performances as Highly Enriched aNd Interactive Concert eXperiences) is a STREP project coordinated by the MTG in collaboration with TU Delft and funded by the European Commission.

The project will make use of state-of-the-art digital multimedia and internet technology to make traditional concert experiences rich and universally accessible: concerts will become multimodal, multi-perspective and multilayer digital artefacts that can be easily explored, customized, personalized, (re)enjoyed and shared among users. The main goal is twofold: (a) to make live concerts appealing to potential new audiences and (b) to maximize the quality of the concert experience for everyone.

PHENICX will last 36 months starting the 1st of February 2013, and the partner institutions involved are TU Delft, Johannes Kepler Universität Linz (JKU), Stichting Koninklijk Concertgebouworkest (RCO), VideoDock BV (VD), Österreichische Studiengesellschaft für Kybernetik (OFAI) and Escola Superior de Música de Catalunya (ESMUC).

The MTG team, coordinated by Emilia Gómez and Alba B. Rosado, will bring its expertise on audio processing (Jordi Janer & Jordi Bonada), music information retrieval (Agustín Martorell & Juanjo Bosch) and music interaction (Carles Fernández & Sergi Jordà) to work on different research challenges such as source separation, acoustic rendering, music visualization and gesture-based music interaction.

6 Feb 2013 - 20:48
New funded project to work on traditional music repertoires

SIGMUS, SIGnal Analysis for the Discovery of Traditional MUSic Repertories, is a new project of the MTG funded by the Spanish Ministry of Economy and Competitiveness. SIGMUS will last 36 months starting the 1st of February 2013 and will focus on the study of the melodic and rhythmic characteristics of Flamenco and Arab-Andalusian music repertoires by applying audio processing and semantic analysis methodologies.


5 Feb 2013 - 12:43
Seminar by Bill Verplank on sketching metaphors

Bill Verplank, from CCRMA, will give a seminar on "Sketching Metaphors" on Thursday, February 7th, at 3:30pm in room 52.321.

Abstract:
In this seminar I will describe (sketch) some metaphors I have used to provide a framework for Interaction Design; examples will be drawn from the course that Max Mathews and I developed at CCRMA on designing music controllers (NIME).

About Bill Verplank:
Bill Verplank is a human factors engineer and designer educated in ME at Stanford and MIT. After four years teaching design at Stanford he spent 22 years in industry: at Xerox (user interface), IDEO (product design) and Interval Research (haptics). He has been active as a visiting lecturer at Stanford (ME, CS, CCRMA), ID/IIT, TU/e, IDII, CIID and professionally in ACM: SIGCHI, DIS, TEI, NIME.

4 Feb 2013 - 17:15
Web Interface Designer job at the MTG-UPF

At the MTG-UPF and in the context of the CompMusic project we are looking for a Web Interface Designer to be involved in the development of a system to browse and interact with audio collections. The system is an online web application that interfaces with musical data (audio, scores, editorial information) plus musical descriptions that are automatically obtained from the data.

The Web Designer will be responsible for the graphical and functional design elements of the system, creating and implementing attractive and effective website designs that provide the end user with an engaging experience.

Given that the work will involve many meetings and discussions with the researchers at the UPF, the candidate should live in the Barcelona area.

Required skills:

  • Experience in web and interface design, graphic design, web development, user interface design and user experience.
  • Have an innovative design approach to navigation and search of audiovisual media.
  • Experience in graphic development tools such as Photoshop, Illustrator or similar.
  • Software development skills using HTML/CSS/JS (recommended HTML5 and CSS3).
  • Proficiency in English.

Interested candidates should send a CV and examples of work relevant to this position to Xavier Serra (xavier [dot] serra [at] upf [dot] edu) with the subject "Web Designer job".

25 Jan 2013 - 18:28
Seminar by Geoffroy Peeters on annotating MIR corpora

Geoffroy Peeters, from IRCAM, will give a seminar on "Annotated MIR Corpora, MSSE search engine for music, Perceptual Tempo" on Thursday, January 24th, at 3:30pm in room 52.321.

Abstract:
In this talk I will focus on three recent topics studied at IRCAM.
 
The first concerns a proposal for the description of annotated MIR corpora. Considering that annotated MIR corpora are today provided by various research labs and companies, each one using its own annotation methodology, concept definitions and formats, it is essential to define precisely how annotations are supplied and described. We propose here a set of axes against which corpora can be described.
 
The second concerns our experience in integrating music indexing technologies into a third-party search and navigation engine (the Orange MSSE search engine). We explain the work performed for this in terms of the choice of technology, the development of annotated corpora for training the systems, HMI development, and user tests.
 
The third concerns the estimation of perceptual tempo and the reduction of the so-called octave errors of tempo estimation algorithms. Using data from a Last.fm perceptual experiment, we model the relationship between a set of four audio features and the perceptual tempo using a GMM regression technique. We show that this technique outperforms current tempo estimation algorithms.
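The paper's exact features and data are not reproduced here, but as a hedged sketch of GMM regression itself: fit a Gaussian mixture on the joint feature/tempo space and predict the conditional expectation of tempo given the features. The feature matrix X, tempo annotations y and the number of components below are placeholder assumptions:

```python
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

def fit_joint_gmm(X, y, n_components=4):
    # Fit a GMM on the joint space [features, tempo]; X is (n, d), y is (n,).
    Z = np.column_stack([X, y])
    return GaussianMixture(n_components=n_components, covariance_type="full").fit(Z)

def predict_tempo(gmm, X):
    d = X.shape[1]
    preds = np.zeros(len(X))
    for i, x in enumerate(X):
        w = np.zeros(gmm.n_components)      # responsibility of each component given x
        cond = np.zeros(gmm.n_components)   # per-component conditional mean of the tempo
        for k in range(gmm.n_components):
            mu, S = gmm.means_[k], gmm.covariances_[k]
            mu_x, Sxx = mu[:d], S[:d, :d]
            w[k] = gmm.weights_[k] * multivariate_normal.pdf(x, mean=mu_x, cov=Sxx)
            cond[k] = mu[d] + S[d, :d] @ np.linalg.solve(Sxx, x - mu_x)
        preds[i] = np.dot(w / (w.sum() + 1e-12), cond)
    return preds
```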
 
References:
  • G. Peeters and K. Fort. "Towards a (better) definition of the description of annotated MIR corpora," In Proc. of ISMIR, Porto, Portugal, October 2012.
  • G. Peeters, F. Cornu, D. Tardieu, C. Charbuillet, J. J. Burred, M. Ramona, M. Vian, V. Botherel, J.-B. Rault, and J.-P. Cabanal. "A multimedia search and navigation prototype, including music and video-clips," In Proc. of ISMIR, Porto, Portugal, October 2012.
  • G. Peeters and J. Flocon-Cholet. "Perceptual tempo estimation using GMM regression," In Proc. of ACM Multimedia / MIRUM (Workshop on Music Information Retrieval with User-Centered and Multimodal Strategies), Nara, Japan, October 2012.
18 Jan 2013 - 13:21
Music for Cochlear Implants concert
9 Feb 2013

Saturday February 9th, 2013 at 12PM

Auditori CAIXA FORUM (Barcelona)

Av. Francesc Ferrer i Guàrdia 6-8, Barcelona

Free admission

We invite you to participate in a unique experience in which researchers and musicians come together for the hearing impaired.

musIC is a research project on music perception with cochlear implant devices, and it aims to understand how these devices can be further developed. The cochlear implant is an implanted medical device designed mainly to restore the perception of speech, but it still has many limitations for music listening. With this objective we are organizing a concert specially designed around the limitations of listening to music with cochlear implants. The concert is also intended for the general public.

We will hear pieces played with different instruments and formations: string quartet and flute, soprano, piano, guitar, and the ReacTable, an interactive electronic instrument developed by the Music Technology Group.

With this concert we will try to better understand how music is perceived. Attendees can contribute to the research by participating in a survey about this musical experience.

Compositions: Alejandro Civilotti, Alejandro Fränkel, Sergio Naddei, Luis Nogueira.

Organizers: Music Technology Group (Universitat Pompeu Fabra) and Phonos Foundation. With the support of: Advanced Bionics.

More information: http://phonos.upf.edu/music and http://phonos.upf.edu/blog

9 Jan 2013 - 17:56
Seminar by Uri Nieto on Music structure analysis

Uri Nieto, from the Music and Audio Research Lab of NYU, will give a talk on "Music Structure Analysis using Matrix Factorization" on Thursday, January 10th, at 3:30pm in room 52.321.

Abstract: We propose a novel and fast approach to discover structure in western popular music by using a specific type of matrix factorization that adds a convexity constraint to obtain a decomposition that can be interpreted as a set of weighted cluster centroids. We show that these centroids capture the different sections of a musical piece (e.g. verse, chorus) in a more consistent and efficient way than classic non-negative matrix factorization. This technique is capable of identifying the boundaries of the sections and then grouping them into different clusters.
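As a hedged, simplified illustration of factorization-based structure analysis (using plain NMF on beat-synchronous chroma rather than the convex variant the talk describes), one could label each beat by its dominant component; the file name and number of components are placeholder assumptions:

```python
import numpy as np
import librosa
from sklearn.decomposition import NMF

# Beat-synchronous chroma features ("song.wav" is a placeholder input).
y, sr = librosa.load("song.wav")
tempo, beats = librosa.beat.beat_track(y=y, sr=sr)
chroma = librosa.feature.chroma_cqt(y=y, sr=sr)
C = librosa.util.sync(chroma, beats, aggregate=np.median)  # one chroma column per beat

# Plain NMF (not convex NMF); the number of "section types" is assumed.
model = NMF(n_components=4, init="nndsvd", max_iter=500)
W = model.fit_transform(C)        # 12 x k pitch-class templates
H = model.components_             # k x n_beats activations

labels = H.argmax(axis=0)                         # crude section label per beat
boundaries = np.flatnonzero(np.diff(labels)) + 1  # beats where the label changes
print(labels, boundaries)
```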

Biography: Oriol Nieto is currently pursuing a Ph.D. degree in Music Technology at New York University, supervised by Morwaread Farbood and Juan Bello. He is a guitarist, violinist, composer, and music technologist. He received a B.S. in Computer Science from the Polytechnic University of Catalonia (UPC), an M.S. in Information Technologies, Communication, and Audiovisual Media from Pompeu Fabra University (UPF), and an M.A. in Music, Science and Technology from Stanford University. His main interests are Music Information Retrieval, Music Cognition, Machine Learning, and Mobile Music.

7 Jan 2013 - 09:49
PhD positions at the MTG

The MTG is opening 5 funded PhD positions to work within some of its research areas, with a starting date of September 2013. The candidates have to apply to the PhD program of the Department of Information and Communication Technologies. They have to demonstrate an academic and research background in the area they are applying for and have to submit a research proposal on a specific topic. The areas and topics for which we offer the funded positions are:

Sound and Music Communities (responsible faculty: Xavier Serra): Within the CompMusic project we have two open positions to work on computational approaches for the understanding of the relationship between lyrics and music in one or several of the following music traditions: Hindustani (North India), Carnatic (South India), Turkish-makam (Turkey), Andalusian (Maghreb) and Beijing Opera (China). In the context of Freesound we also have one open position to work on issues related to community profiling, linked data, automatic data structuring and sound ontologies.

Sound and Music Description (responsible faculty: Emilia Gómez): We have one open position to work on one of these two topics: (1) computer-assisted transcription, similarity and classification of flamenco singing, using signal processing, machine learning and user modeling methodologies (Spanish-funded project SIGMUS); (2) Music & Autobiographical Memory, dealing with audio feature extraction, music recommendation and user preference modeling.

Musical and Advanced Interaction (responsible faculty: Sergi Jordà): We have one open position to work on tangible and tabletop interaction.

Before making the application the candidate needs the support of the faculty member responsible for the research area chosen. Interested people should first send a CV and a motivation letter to the faculty member identified.

3 Jan 2013 - 13:05