Note: This bibliographic page is archived and will no longer be updated. For an up-to-date list of publications from the Music Technology Group see the Publications list.

Visual music transcription of clarinet video recordings trained with audio-based labelled data

Title Visual music transcription of clarinet video recordings trained with audio-based labelled data
Publication Type Conference Paper
Year of Publication 2017
Conference Name ICCV 2017 Workshop on Computer Vision for Audio-Visual Media (CVAVM)
Authors Zinemanas, P., Arias, P., Haro, G., & Gómez, E.
Conference Start Date 23/10/2017
Conference Location Venice, Italy
Abstract Automatic transcription is a well-known task in the music information retrieval (MIR) domain; it consists of computing a symbolic music representation (e.g. MIDI) from an audio recording. In this work, we address the automatic transcription of video recordings when the audio modality is missing or of insufficient quality, and therefore analyze the visual information instead. We focus on the clarinet, which is played by opening and closing a set of holes and keys. We propose a method for automatic visual note estimation that detects the fingertips of the player and measures their displacement with respect to the holes and keys of the clarinet. To this end, we track the clarinet and determine its position in every frame. The relative positions of the fingertips are used as features for a machine learning algorithm trained for note pitch classification. For this purpose, a dataset is built in a semi-automatic way by estimating pitch information from the audio signals of an existing collection of 4.5 hours of video recordings, covering six different songs performed by nine different players. Our results confirm that visual transcription is more difficult than audio-based transcription, mainly due to motion blur and occlusions that cannot be resolved with a single view.
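As a rough illustration of the feature and classification stage the abstract outlines, the following is a minimal sketch assuming per-frame fingertip and hole/key coordinates are already available from some detector and tracker. Every name, shape, and the choice of classifier here are illustrative assumptions, not the authors' implementation; a generic random forest merely stands in for whatever model the paper trained.

```python
# Minimal sketch: fingertip-to-hole displacements as pitch-classification
# features. Assumes coordinates come from an external tracker (hypothetical).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def relative_features(fingertips, holes):
    """Displacement of each fingertip from its corresponding hole/key.

    fingertips, holes: (n_frames, n_points, 2) arrays of (x, y) image
    coordinates. Returns a flat (n_frames, n_points * 2) feature matrix.
    """
    disp = fingertips - holes          # per-point displacement vectors
    return disp.reshape(disp.shape[0], -1)

# Hypothetical stand-in data: 9 tracked fingertips over 1000 frames, with
# MIDI pitch labels as would be obtained from audio-based pitch estimation
# (the paper's semi-automatic labelling strategy).
rng = np.random.default_rng(0)
X = relative_features(rng.normal(size=(1000, 9, 2)),
                      rng.normal(size=(1000, 9, 2)))
y = rng.integers(50, 90, size=1000)    # placeholder MIDI note labels

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X[:800], y[:800])
print("held-out accuracy:", clf.score(X[800:], y[800:]))
```

Normalizing displacements relative to the tracked clarinet position (rather than raw image coordinates) is what makes the features invariant to instrument motion between frames, which is why the paper tracks the clarinet in every frame.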
Preprint/postprint document https://zenodo.org/record/848650