Real-time Machine Listening and Segmental Re-synthesis for Networked Music Performance
The general scope of this work is to investigate potential benefits to Networked Music Performance (NMP) systems from techniques commonly found in Machine Musicianship. Machine Musicianship is a research area aiming to develop software systems that exhibit musical skills such as listening to, composing or performing music. A distinct track of this research line, and the one most relevant to this work, is computer accompaniment systems. Such systems are expected to accompany human musicians by causally analysing the music being performed and responding in a timely manner, synthesizing an accompaniment or the part of one or more of the remaining members of a performance ensemble.

The objective of the present work is to investigate the possibility of representing each performer of a dispersed NMP ensemble by a local computer-based musician, which constantly listens to the local performance, receives network notifications from remote locations and re-synthesizes the performance of remote peers. Whenever a new musical construct is recognized at the location of a performer, a code representing that construct is communicated to all of the remaining musicians as low-bandwidth information. Upon reception, the remote audio signal is re-synthesized by splicing pre-recorded audio segments corresponding to the musical construct identified by the received code.

Computer accompaniment systems may use any conventional audio synthesis technique to generate the accompaniment. In this work, investigations focus on concatenative music synthesis, in an attempt to preserve the expressive nuances introduced by the interpretation of individual performers. Hence, the research carried out and presented in this dissertation lies at the intersection of three domains: NMP, Machine Musicianship and Concatenative Music Synthesis.
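The notification scheme described above can be sketched in a few lines. The message layout (a construct code plus an onset time), the function names and the segment-bank structure below are illustrative assumptions, not the dissertation's actual protocol:

```python
import struct

# A 12-byte notification: construct code (uint32) + local onset time (float64),
# network byte order. Sending this instead of an audio stream is what makes
# the scheme low-bandwidth.
NOTIFICATION = struct.Struct("!Id")

def encode_notification(code: int, onset_time: float) -> bytes:
    """Pack a recognized-construct notification for transmission to peers."""
    return NOTIFICATION.pack(code, onset_time)

def decode_notification(payload: bytes):
    """Unpack a received notification into (code, onset_time)."""
    return NOTIFICATION.unpack(payload)

def resynthesize(codes, segment_bank):
    """Splice the pre-recorded audio units named by the received codes.

    `segment_bank` maps each construct code to a list of audio samples
    recorded from the corresponding performer (a hypothetical structure).
    """
    return [sample for code in codes for sample in segment_bank[code]]
```

At 12 bytes per recognized construct, the notification traffic is orders of magnitude below that of a compressed audio stream, which is the point of the representation.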
The dissertation first presents an analysis of current trends in all three research domains, and then elaborates on the methodology followed to realize the intended scenario. Research efforts have led to the development of BoogieNet, a preliminary software prototype implementing the proposed communication scheme for networked musical interactions. Real-time music analysis is achieved by means of audio-to-score alignment techniques, and re-synthesis at the receiving end takes place by concatenating pre-recorded, automatically segmented audio units generated by means of onset detection algorithms. The methodology of the entire process is presented and contrasted with competing analysis/synthesis techniques. Finally, the dissertation presents important implementation details and an experimental evaluation demonstrating the feasibility of the proposed approach.
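As an illustration of how onset detection can segment a recording into concatenable audio units, here is a minimal energy-based sketch. It is not the detector used in BoogieNet; the frame size and jump threshold are assumptions chosen for clarity:

```python
def frame_energies(signal, frame_size=512):
    """Mean squared energy per non-overlapping frame of the signal."""
    return [sum(x * x for x in signal[i:i + frame_size]) / frame_size
            for i in range(0, len(signal), frame_size)]

def detect_onsets(signal, frame_size=512, ratio=2.0):
    """Sample positions where frame energy jumps by `ratio` over the
    previous frame, taken as note-onset candidates."""
    e = frame_energies(signal, frame_size)
    return [i * frame_size for i in range(1, len(e))
            if e[i] > ratio * e[i - 1] and e[i] > 1e-6]

def segment(signal, onsets):
    """Cut the signal at onset positions into a list of audio units,
    ready to be stored in a segment bank and later concatenated."""
    bounds = [0] + list(onsets) + [len(signal)]
    return [signal[a:b] for a, b in zip(bounds, bounds[1:])]
```

Each resulting unit corresponds to one pre-recorded musical event, so splicing units in the order dictated by received construct codes reconstructs the remote performance.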
Journal Articles and Abstracts
Alexandraki, Chrisoula & Bader, Rolf, "Anticipatory Networked Communications for Live Musical Interactions of Acoustic Instruments", in: Journal of New Music Research 45(1), 2016, pp. 68–85.
Alexandraki, Chrisoula & Bader, Rolf, "Using Computer Accompaniment to Assist Networked Music Performance" (Abstract), in: J. Audio Eng. Soc. 61(12), 2013, p. 1057.
Alexandraki, Chrisoula & Bader, Rolf, "Real-time concatenative synthesis for networked musical interactions" (Abstract), in: J. Acoust. Soc. Am. 133(5), 2013, p. 3367.
Books and Book Chapters
Alexandraki, Chrisoula: Real-time Machine Listening and Segmental Re-synthesis for Networked Music Performance [PDF], Hamburg, Univ., Diss., 2013.
Alexandraki, Chrisoula & Bader, Rolf, "Real-time concatenative synthesis for networked musical interactions", in: Proc. Mtgs. Acoust. (POMA) 19(1), International Congress on Acoustics (ICA) 2013, Montreal, Jun 2013, paper number 035040.
Alexandraki, Chrisoula & Bader, Rolf, "Using Computer Accompaniment to Assist Networked Music Performance", in: Audio Engineering Society Conference: 53rd International Conference: Semantic Audio, London, Jan 2014, pp. 1–9.
Grants and Awards
- Best Poster Award at the AES 53rd Conference on Semantic Audio 2014