JOINT R&D PROJECTS MTG - YAMAHA
A singing voice impersonator intended for the karaoke environment. Using the traditional Spectral Modeling Synthesis (SMS) technique, Elvis transforms the voice of an amateur singer in real time to make it resemble the voice of a professional singer.
A novel singing voice synthesizer based on performance sampling, implementing spectral models especially adapted to the singing voice to achieve natural-sounding transformations. Given a score containing lyrics and notes as input, it automatically concatenates small transformed snippets of recordings to generate a virtual synthesis performance. The multi-year MTG-Yamaha collaboration on singing synthesis has resulted in Yamaha's Vocaloid synthesizer.
An audio processor able to perform professional-sounding time-scaling transformations of polyphonic material across a wide range of tempo changes. Using spectral techniques with a multiband approach, it preserves both transients and the aural image.
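The basic idea of reading input audio at one rate and writing it at another can be illustrated with a deliberately simple time-domain overlap-add sketch. This is not the product's algorithm (which works in the spectral domain with a multiband approach to preserve transients); the function name, frame and hop sizes are illustrative:

```python
# Naive overlap-add (OLA) time-stretching sketch: input frames are read
# at a hop scaled by the stretch factor and cross-faded at the output
# hop with a triangular window, then normalized by the window sum.

def ola_stretch(samples, factor, frame=256, hop=128):
    """Stretch a mono signal by `factor` (>1 slows it down)."""
    out_len = int(len(samples) * factor)
    out = [0.0] * (out_len + frame)
    norm = [0.0] * (out_len + frame)
    # triangular window for smooth cross-fades between frames
    win = [1.0 - abs(2.0 * i / (frame - 1) - 1.0) for i in range(frame)]
    out_pos = 0
    while out_pos < out_len:
        in_pos = int(out_pos / factor)  # map output time back to input time
        for i in range(frame):
            if in_pos + i < len(samples):
                out[out_pos + i] += samples[in_pos + i] * win[i]
                norm[out_pos + i] += win[i]
        out_pos += hop
    # divide by the accumulated window weight to keep amplitude constant
    return [o / n if n > 0 else 0.0
            for o, n in zip(out[:out_len], norm[:out_len])]
```

A plain OLA like this smears transients and phases; that is exactly the shortcoming the multiband spectral approach described above is designed to avoid.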
Using advanced voice spectral processing techniques, the VocalProcessor VST plug-in transforms the voice of a singer in real time and changes its main characteristics. Its possible transformations include pitch transposition, timbre scaling, and excitation effects (breathiness, roughness). In addition, it provides real-time harmonization of up to four voices.
Singing Tutor is a software program that expressively evaluates a user's performance in a karaoke-based environment. It uses the ESPRESSO algorithm library to analyze the user's singing voice and extract the most relevant spectral descriptors and their evolution in time. Based on an annotated reference song, it rates the singing performance, taking into account not only basic timing and pitch characteristics but also expressive aspects of the performance such as vibratos, different attacks, releases, and articulations.
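The pitch-accuracy part of such a rating can be sketched as a frame-by-frame comparison of the user's pitch track against the annotated reference. The function name, frame layout, and tolerance below are illustrative assumptions, and the real system also scores timing and the expressive aspects listed above:

```python
# Toy rating sketch: compare a user's pitch track (one value per
# analysis frame, in semitones) against a reference melody and report
# the fraction of in-tune frames as a 0-100 rating.

def pitch_score(user_pitch, ref_pitch, tolerance=0.5):
    """Fraction of frames within `tolerance` semitones of the reference."""
    if not ref_pitch:
        return 0.0
    hits = sum(
        1 for u, r in zip(user_pitch, ref_pitch)
        if abs(u - r) <= tolerance
    )
    return 100.0 * hits / len(ref_pitch)
```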
An audio mosaicing instrument. Slices from pre-existing audio loops are interchanged, based on their similarity, to create new sounds. The user controls the mosaicing process in an expressive, intuitive and direct manner.
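The similarity-based slice exchange can be sketched as a nearest-neighbour lookup over per-slice feature vectors. The feature choice (e.g. loudness, brightness) and the function names are hypothetical, not the instrument's actual descriptors:

```python
# Minimal mosaicing sketch: every target slice is replaced by the most
# similar slice from a loop database, compared with a squared Euclidean
# distance over small feature vectors.

def nearest_slice(target_feats, database):
    """Return the key of the database slice closest to `target_feats`."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(database, key=lambda k: dist(database[k], target_feats))

def mosaic(target_slices, database):
    """Map every target slice onto its best-matching database slice."""
    return [nearest_slice(f, database) for f in target_slices]
```

In the instrument, the user steers this matching interactively rather than running it as a one-shot batch mapping.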
Violin Synthesis based on Gesture Analysis
A synthesizer that reads a musical score and produces audio that aims to sound as if a violinist had performed it. The main contribution of our approach is that, instead of modeling the instrument or the sound itself, we focus on how the performer controls the instrument. The core of the research consists of (1) predicting the control gestures a performer would use to play a given score, and (2) finding the relation between those control gestures and the sound produced.
A first prototype allows the user to control a singing voice synthesizer with their voice in real time. After entering the lyrics, the user sings, controlling the voice of a virtual singer. A second prototype allows the user to control the sound of musical instruments: in this instrumental karaoke, the user's voice can sound like a saxophone or a bass guitar.
The user selects a song from our database and the system retrieves the available "versions" of it. It can also reveal surprising "connections" between apparently unrelated songs.
Mood Playlist Generator
A software MP3 player that organizes a personal music collection according to "mood" and other functional criteria, and builds specialized playlists according to them.
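Playlist building from such criteria can be sketched as filtering the collection by an estimated mood label and ordering the matches by a secondary criterion. The track layout, field names, and the tempo ordering below are hypothetical:

```python
# Sketch of mood-based playlist building: tracks carry an estimated
# mood label, and a playlist is the subset matching the requested
# mood, ordered here by tempo (slowest first).

def build_playlist(tracks, mood):
    """Return titles of all tracks with the given mood, slowest first."""
    matching = [t for t in tracks if t["mood"] == mood]
    return [t["title"] for t in sorted(matching, key=lambda t: t["bpm"])]
```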
Good Vibrations Plug-in
A Winamp plug-in that lets users build their own "musical personomy": the system learns to classify music using the words or concepts that the user employs to describe a music collection. Music recommendations are also generated using those concepts.
Sound FX Library Manager
Movies, video games, and audiovisual productions in general require sound effects. SFX Lib Manager is a search engine for sound effects that allows searching for sounds based on how they sound. It features common-sense knowledge engines to assist in browsing the collection.
Search and discovery of music in multimillion-track catalogues. Safari is a music search system that lets users explore music collections through a set of "filters" automatically extracted from the audio: mood (happy, sad, furious), production (acoustic/electronic), tempo (fast/medium/slow), genre, and so on. It also supports queries like "find me music similar to Pixies from bands in Singapore".
The first audio content-based search engine. It indexes the WWW for music files and allows searching for similar-sounding music files.
Foafing the Music
A personalized music recommender that helps discover the obscure artists hidden in the Long Tail of popularity. The system also filters music-related information according to the user profile. The engine enables users to: (1) read album reviews of their recommended artists, and (2) stream/download MP3 blogs and podcast sessions.
The system listens in an unsupervised and online manner to a sonic stream and splits it up into basic events that are clustered and displayed for user exploration.
The system synthesizes the expected continuation of musical audio signals using concatenative synthesis. It is a tool for rendering the "sound fantasies" of cognition models.
Billaboop enables real-time transcription of percussive audio (arbitrary sounds, beatbox, drums) into three drum categories. It can learn the sounds the user wants to play.
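The learning step can be sketched as averaging the user's labelled example hits into one template per drum category, then assigning each new onset to the nearest template. Feature vectors, labels, and function names here are made up for illustration; the actual product's features and classifier are not specified in this description:

```python
# Template-learning sketch for drum transcription: average labelled
# example feature vectors into per-category templates, then classify
# new onsets by nearest template (squared Euclidean distance).

def learn_templates(examples):
    """Average labelled (label, features) examples into templates."""
    grouped = {}
    for label, feats in examples:
        grouped.setdefault(label, []).append(feats)
    return {
        label: tuple(sum(col) / len(col) for col in zip(*vecs))
        for label, vecs in grouped.items()
    }

def classify_hit(features, templates):
    """Return the drum label whose template is nearest to `features`."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(templates, key=lambda label: dist(templates[label], features))
```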
A real-time VST plugin for audio separation of stereo commercial music productions. It provides several separation criteria, including panning, channel phase differences, and amplitude variations. One of its possible applications is remixing songs, where the user can independently modify the volume of each instrument.
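The panning criterion can be sketched in the time domain (the plugin itself applies such criteria per spectral bin, which is what makes the separation usable): estimate a panning index in [-1, 1] from the left/right amplitudes and keep only the material whose index falls in the requested range. Function names are illustrative:

```python
# Panning-separation sketch: a panning index is computed per sample
# pair from the relative left/right amplitudes, and samples outside
# the requested pan range are zeroed out.

def panning_index(l, r, eps=1e-12):
    """-1 = fully left, 0 = centre, +1 = fully right."""
    return (abs(r) - abs(l)) / (abs(l) + abs(r) + eps)

def isolate(left, right, lo, hi):
    """Zero out samples whose panning index lies outside [lo, hi]."""
    out_l, out_r = [], []
    for l, r in zip(left, right):
        keep = lo <= panning_index(l, r) <= hi
        out_l.append(l if keep else 0.0)
        out_r.append(r if keep else 0.0)
    return out_l, out_r
```

For example, `isolate(left, right, -0.1, 0.1)` keeps only centre-panned material such as a lead vocal in many commercial mixes.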
A project that aims to create a huge collaborative database of audio snippets, samples, recordings, bleeps, etc., released under the Creative Commons Sampling Plus License. The Freesound Project provides new and interesting ways of accessing these samples.
Expressive Music Performance
Expressive music performance system for monophonic Jazz melodies. The system is based on (1) a machine learning component which induces an expressive transformation model from a set of expressive recordings, and (2) a melody synthesis component which generates expressive monophonic output from inexpressive melody descriptions using the induced transformation model.
A collaborative electronic music instrument with a tabletop tangible multi-touch interface. Several simultaneous performers share complete control over the instrument by moving and rotating physical objects on a luminous round table surface. By moving and relating these objects, representing components of a classic modular synthesizer, users can create complex and dynamic sonic topologies, with generators, filters and modulators, in a kind of tangible modular synthesizer or graspable flow-controlled programming language.
Interactive TV is a personalized music TV channel that matches the user's taste and can tune in to the user's mood. Users can get recommended music video streams, ask for more like this, filter by mood, and get the lyrics.
KaleiVoiceCope transforms your voice in real time, offering a wide range of possibilities, from changing the gender or age of your voice to simulating a robotic or monster voice. The input voice is analyzed in the spectral domain, a set of spectral descriptors is extracted from it and, based on a set of parameters, a new voice is generated by changing the timbre, the amplitude, the pitch, and other spectral and physical characteristics. A set of buttons allows the user to select the desired transformation.
Music by Phonos Foundation
Audition of recordings of the following pieces from the last CD by Phonos:
- Gabriel Brncic: Quodlibet (Iñaki Alberdi, accordion)
- Lluis Callejo: A Pitágoras en re (one of the first pieces with computers of Phonos)
- Andrés Lewin-Richter: Caminando 2 (uses Vocaloid to generate 4 voices)
- Enrique Marín: Transiciones de fase (Jesús Jara, tuba)
- José M. Mestres Quadreny: Estro Aleatorio 3 (piece for typist and orchestra)
- Joan Sanmartí: Passadís (Lito Iglesias and his band, cellos)
- Wayne Siegel: Jackdaw (Carlos Gil, trombone)