Title: Machine listening and learning for musical systems
Speaker: Nick Collins, University of Sussex
Host: Perfecto Herrera
Date: Thursday June 7th, 2012, 15:30
Location: room 52.321, Roc Boronat building
Abstract: Musical artificial intelligences are playing an important role in new composition and performance systems. Critical to enhanced capabilities for such machine musicians will be listening facilities that model human audition, and machine learning able to match the minimum 10,000 hours, or ten years, of intensive practice of expert human musicians. Future musical agents will cope across multiple rehearsals and concert tours, or gather multiple commissions, potentially working over long musical lifetimes; they may be virtuoso performers and composers credited in their own right, or powerful musical companions and assistants to human musicians.
In this presentation we'll meet a number of projects related to these themes. The concert system LL will be introduced: an experiment in listening and learning, applied in works for drummer and computer, and for electric violin and computer. Autocousmatic will be presented, an algorithmic composer for electroacoustic music which incorporates machine listening in its critic module. Large-corpus content analysis work in music information retrieval shows great promise when adapted to concert systems and automated composers, and the SuperCollider library SCMIR will be demonstrated, alongside a new realtime polyphonic pitch tracker.
Title: Percussive/Harmonic/Singing Separation on Monaural Music Signals
Speaker: Pedro Vera Candeas and Francisco Jesús Cañadas Quesada, Universidad de Jaén
Host: Emilia Gómez
Date: Friday June 8th, 2012, 11:00am
Location: room 55.410, Tànger building
Abstract: This seminar is divided into two parts. In the first part, a brief review of current percussive/harmonic/singing separation techniques is presented. In the second part, a preliminary unsupervised algorithm for separating percussive, harmonic and singing voice components from monaural polyphonic signals is shown. The proposed algorithm, based on a modified Nonnegative Matrix Factorization (NMF) procedure, does not require any training stages to distinguish between percussive, harmonic and singing voice bases, because useful information from percussive, harmonic and vocal sounds is integrated into the decomposition process. In this process, NMF is performed by assuming that harmonic sounds exhibit spectral sparseness and temporal smoothness, and percussive sounds exhibit spectral smoothness and temporal sparseness, while singing voice sounds are characterized by a source/filter model into which the constraint “only one source is active” is integrated. Promising results were obtained when the proposed approach was evaluated on commercial real-world signals.
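To give a flavour of the kind of decomposition the abstract describes, the following is a minimal illustrative sketch of plain NMF with multiplicative updates and an L1 sparsity penalty on the activations. This is not the speakers' algorithm (their method adds smoothness constraints and a source/filter voice model); the function name, the penalty weight and the toy spectrogram are assumptions for illustration only.

```python
import numpy as np

def nmf_sparse(V, rank, n_iter=200, sparsity=0.1, seed=0):
    """Factorize a nonnegative matrix V (freq x frames) as W @ H,
    using Lee-Seung multiplicative updates for the Euclidean cost,
    with an L1 sparsity penalty on the activations H.
    Illustrative sketch only, not the seminar's full algorithm."""
    rng = np.random.default_rng(seed)
    n_freq, n_frames = V.shape
    # Random nonnegative initialization (small offset avoids zeros).
    W = rng.random((n_freq, rank)) + 1e-9
    H = rng.random((rank, n_frames)) + 1e-9
    for _ in range(n_iter):
        # Update H: the sparsity weight enters the denominator,
        # which shrinks small activations toward zero (L1 penalty).
        H *= (W.T @ V) / (W.T @ W @ H + sparsity + 1e-9)
        # Update W: standard unpenalized rule.
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

# Toy magnitude "spectrogram": 64 frequency bins, 100 time frames.
V = np.abs(np.random.default_rng(1).random((64, 100)))
W, H = nmf_sparse(V, rank=2)
print(W.shape, H.shape)  # basis spectra and their activations
```

In a separation setting, each column of W would play the role of a basis spectrum (percussive, harmonic or vocal), and the constraints the speakers describe would be imposed as extra penalty terms in these update rules rather than by supervised training.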