Networked music performance
A networked music performance or network musical performance is a real-time interaction over a computer network that enables musicians in different locations to perform as if they were in the same room.[1] These interactions can include performances, rehearsals, improvisation or jamming sessions, and situations for learning such as master classes.[2] Participants may be connected by "high fidelity multichannel audio and video links"[3] as well as MIDI data connections[1] and specialized collaborative software tools. While not intended to be a replacement for traditional live stage performance, networked music performance supports musical interaction when co-presence is not possible and allows for novel forms of music expression.[2] Remote audience members and possibly a conductor may also participate.[3]
History
One of the earliest experiments in networked music performance was the 1951 piece “Imaginary Landscape No. 4 for Twelve Radios” by composer John Cage.[4] The piece “used radio transistors as a musical instrument. The transistors were interconnected thus influencing each other.”[4][5]
In the late 1970s, as personal computers were becoming more available and affordable, groups like the League of Automatic Music Composers began to experiment with linking multiple computers, electronic instruments, and analog circuitry to create novel forms of music.[6]
The 1990s saw several important experiments in networked performance. In 1993, the University of Southern California Information Sciences Institute began experimenting with networked music performance over the Internet.[3] The Hub, a band formed by original members of The League of Automatic Music Composers, experimented in 1997 with sending MIDI data over Ethernet to distributed locations.[6] However, “it was more difficult than imagined to debug all of the software problems on each of the different machines with different operating systems and CPU speeds in different cities”.[6] In 1998, a three-way audio-only performance dubbed “Mélange à trois” connected musicians in Warsaw, Helsinki, and Oslo.[3][7] These early distributed performances all faced problems such as network delay, difficulty synchronizing signals, echo, and shortcomings in non-immersive audio and video acquisition and rendering.[3]
The development of high-speed, over-provisioned internet backbones such as Internet2 made high-quality audio links possible beginning in the early 2000s.[4] One of the first research groups to take advantage of the improved network performance was the SoundWIRE group at Stanford University's CCRMA.[8] It was soon followed by projects such as the Distributed Immersive Performance experiments,[3] SoundJack,[4] and DIAMOUSES.[2]
Awareness in Musical Performance
According to Gutwin and Greenberg,[9] workspace awareness in a face-to-face situation is gathered through consequential communication, feedthrough, and intentional communication. A traditional music performance setting is an example of very tightly coupled, synergistic collaboration in which participants have a high level of workspace awareness. “Each player must not only be conscious of his or her own part, but also of the parts of other musicians. The other musicians' gestures, facial expressions and bodily movements, as well as the sounds emitted by their instruments [are] clues to meanings and intentions of others”.[10] Research has indicated that musicians are also very sensitive to the acoustic response of the environment in which they are performing.[3] Ideally, a networked music performance system would facilitate the high level of awareness that performers experience in a traditional performance setting.
Technical Issues in Networked Music Performance
Bandwidth demand, latency sensitivity, and a strict requirement for audio stream synchronization are the factors that make networked music performance a challenging application.[11] These factors are described in more detail below:
Bandwidth
High-definition audio streaming, which is used to make a networked music performance as realistic as possible, is one of the most bandwidth-demanding uses of today's networks.[11]
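As a rough, hypothetical illustration (the sample rate, bit depth, and ensemble size below are assumptions for the example, not figures from the cited sources), the following sketch computes the raw bitrate of a single uncompressed PCM stream and the aggregate traffic a small ensemble would generate:

```python
# Illustrative sketch: raw bitrate of uncompressed PCM audio is simply
# sample_rate * bit_depth * channels, which each performer must send and
# receive continuously during a networked performance.

def uncompressed_bitrate_mbps(sample_rate_hz: int, bit_depth: int, channels: int) -> float:
    """Raw PCM bitrate in Mbit/s, ignoring packet and protocol overhead."""
    return sample_rate_hz * bit_depth * channels / 1_000_000

if __name__ == "__main__":
    # A stereo 96 kHz / 24-bit "high definition" stream (assumed values):
    per_stream = uncompressed_bitrate_mbps(96_000, 24, 2)
    print(f"per stream: {per_stream:.2f} Mbit/s")              # ~4.61 Mbit/s
    # Four musicians each exchanging a stream with the other three:
    print(f"per site (quartet): {3 * per_stream:.2f} Mbit/s")  # ~13.8 Mbit/s
```

Even before protocol overhead, a handful of such streams can dominate a typical consumer uplink, which is why dedicated or over-provisioned networks are often used.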
Latency
One of the major issues in networked music performance is the latency introduced into the audio as it is processed by a participant's local system and sent across the network. For interaction in a networked music performance to feel natural, the latency generally must be kept below 30 milliseconds, the bound of human perception.[12] Too much delay in the system makes performance very difficult, since musicians adjust their playing based on the sounds they hear from the other players in order to coordinate the performance.[1] However, the characteristics of the piece being played, the musicians, and the types of instruments used ultimately define the tolerance.[3] Synchronization cues may be used in a networked music performance system that is designed for long-latency situations.[1]
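To make the budget concrete, the sketch below (buffer sizes, sample rate, and distance are illustrative assumptions, not values from the cited papers) adds up the main one-way delay contributions, audio buffering at each endpoint plus best-case fiber propagation, and compares the total against the 30 ms threshold:

```python
# Rough one-way latency budget for a networked music performance link,
# under assumed (illustrative) numbers.

FIBER_KM_PER_MS = 200.0  # light in fiber travels roughly 200 km per millisecond

def buffering_delay_ms(buffer_frames: int, sample_rate_hz: int) -> float:
    """Delay added by one audio buffer (sound card or streaming block)."""
    return 1000.0 * buffer_frames / sample_rate_hz

def propagation_delay_ms(distance_km: float) -> float:
    """Best-case one-way propagation delay, ignoring routing detours and queuing."""
    return distance_km / FIBER_KM_PER_MS

if __name__ == "__main__":
    buf = buffering_delay_ms(128, 48_000)            # ~2.7 ms per 128-frame buffer at 48 kHz
    one_way = 2 * buf + propagation_delay_ms(1000)   # buffering at both ends + 1000 km of fiber
    print(f"estimated one-way delay: {one_way:.1f} ms")  # ~10.3 ms, within the ~30 ms budget
```

In practice, routing detours, queuing, jitter buffers, and audio interface overhead consume much of the remaining margin, which is why latency is so often the limiting factor.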
Audio Stream Synchronization
Both end systems and networks must synchronize multiple audio streams from separate locations to form a consistent presentation of the music.[11] This is a challenging problem for today's systems.
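One common way to reason about this problem is timestamp-based playout scheduling; the sketch below is an assumption-laden illustration (the class, field names, and 30 ms playout delay are not drawn from any system cited in this article). Each incoming block carries the sender's capture time, and every stream is rendered at that capture time plus a common playout delay, so blocks captured together are heard together:

```python
# Minimal sketch of timestamp-based playout alignment for multiple remote streams.

from dataclasses import dataclass, field

@dataclass
class AudioBlock:
    stream_id: str        # which remote performer sent the block
    capture_time: float   # sender clock, assumed synchronized (e.g. via NTP/PTP)
    samples: list = field(default_factory=list)  # PCM payload

PLAYOUT_DELAY_S = 0.030   # shared playout offset applied to all streams (assumed value)

def playout_time(block: AudioBlock) -> float:
    """Time at which the block should reach the listener's speakers."""
    return block.capture_time + PLAYOUT_DELAY_S

if __name__ == "__main__":
    incoming = [AudioBlock("oslo", 12.000), AudioBlock("helsinki", 12.001)]
    for block in sorted(incoming, key=playout_time):
        print(block.stream_id, f"play at t = {playout_time(block):.3f} s")
```

The design trade-off is that a larger shared playout delay tolerates more network jitter but adds directly to the latency discussed above.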
Objectives of a Networked Music Performance System
The objectives of a networked music performance can be summarized as:
- It should allow musicians and possibly audience members and/or a conductor to collaborate from remote locations
- It should create a realistic immersive virtual space for synchronous, interactive performance
- It should support workspace awareness that allows participants to be aware of the actions of others in the virtual workspace and facilitate all forms of communication
Current Research
SoundWIRE Research Group at CCRMA, Stanford University
The SoundWIRE research group explores several research areas in the use of networks for music performance, including multi-channel audio streaming, physical models and virtual acoustics, the sonification of network performance, psychoacoustics, and networked music performance practice.[7] The group has developed a software system, JackTrip, that supports multi-channel, high-quality, uncompressed streaming audio for networked music performance over the Internet.[7]
The Sonic Arts Research Centre (SARC), Queen’s University Belfast
SARC has carried out network performances since 2006 and has been active in the use of networks as both collaborative and performance tools. The network team at SARC is led by Prof. Pedro Rebelo and Dr. Franziska Schroeder and works with varying set-ups of performers, instruments, and compositional strategies. A group of artists and researchers has emerged around this field of distributed creativity at SARC, helping to create a broader knowledge base and focus for activities. As a result, since 2007 SARC has had a dedicated team of staff and students with knowledge and experience of network performance, which SARC refers to as “Distributed Creativity”. Regular performances, workshops, and collaborations with institutions such as the SoundWIRE group at CCRMA, Stanford University; RPI, led by composer and performer Pauline Oliveros; and the University of São Paulo have helped strengthen this emerging community of researchers and practitioners. Several research papers on the topic are listed on the Distributed Creativity wiki page.
Distributed Immersive Performance (DIP) Experiments
The Distributed Immersive Performance project is based at the Integrated Media Systems Center at the University of Southern California.[13] The Distributed Immersive Performance experiments explore the challenges of creating a seamless environment for remote, synchronous collaboration.[3] The experiments use 3D audio with correct spatial sound localization as well as HD or DV video projected onto wide-screen displays to create an immersive virtual space.[3] Interaction sites are set up at various locations on the University of Southern California campus and at several partner locations, such as the New World Symphony in Miami Beach, Florida.[3]
DIAMOUSES
The DIAMOUSES project is coordinated by the Music Informatics Lab at the Technological Educational Institute of Crete, Greece.[14] It supports a wide range of networked music performance scenarios with a customizable platform that handles the broadcasting and synchronization of audio and video signals across a network.[2]
Wireless Music Studio (WeMUST)
The A3Lab team at Università Politecnica delle Marche conducts research on the use of the wireless medium for uncompressed audio networking in the NMP context.[15] A mix of open source software, ARM platforms, and dedicated wireless equipment has been documented, especially for outdoor use, where performances can take place at buildings of historical importance or in difficult environments (e.g. at sea). A premiere of the system was conducted with musicians playing a Stockhausen composition on different boats off the coast of Ancona, Italy. The project also aims to shift music computing from laptops to embedded devices.[16]
See also
- Internet band
- CSCW
- CELT and Opus, codecs designed for these applications
- Computer Music
External links
- Research projects
- DIAMOUSES distributed interactive communication environment for live music performance
- Distributed Immersive Performance Project
- SoundWIRE Research Group at CCRMA
- Syneme Telemusic Studio at the Central Conservatory of Music in Beijing
- LOLA LOw LAtency audiovisual streaming system
- Artsmesh: Mac application front end for jack/jackrouter/jacktrip, syphon/ffmpeg, oscgroups, network tools, GNUSocial
References
1. Lazzaro, J.; Wawrzynek, J. (2001). "A case for network musical performance". NOSSDAV '01: Proceedings of the 11th International Workshop on Network and Operating Systems Support for Digital Audio and Video. New York, NY: ACM Press. pp. 157–166.
2. Alexandraki, C.; Koutlemanis, P.; Gasteratos, P.; Valsamakis, N.; Akoumianakis, D.; Milolidakis, G.; Vellis, G.; Kotsalis, D. (2008). "Towards the implementation of a generic platform for networked music performance: The DIAMOUSES approach". Proceedings of the International Computer Music Conference (ICMC 2008). pp. 251–258.
3. Sawchuk, A.; Chew, E.; Zimmermann, R.; Papadopoulos, C.; Kyriakakis, C. (2003). "From remote media immersion to Distributed Immersive Performance". ETP '03: Proceedings of the 2003 ACM SIGMM Workshop on Experiential Telepresence. New York, NY: ACM Press. pp. 110–120.
4. Alexander, C.; Renaud, A.; Rebelo, P. (2007). "Networked music performance: state of the art". AES 30th International Conference. Audio Engineering Society.
5. Pritchett, J. (1993). The Music of John Cage. Cambridge, UK: Cambridge University Press.
6. Bischoff, J.; Brown, C. "Crossfade". Retrieved 2009-11-26.
7. "SoundWIRE research group at CCRMA, Stanford University". Retrieved 2009-11-23.
8. Chafe, C.; Wilson, S.; Leistikow, R.; Chisholm, D.; Scavone, G. (2000). "A simplified approach to high quality music and sound over IP". Proceedings of the COST G-6 Conference on Digital Audio Effects (DAFX-00).
9. Gutwin, C.; Greenberg, S. (2001). "The Importance of Awareness for Team Cognition in Distributed Collaboration". Dept. of Computer Science, University of Calgary, Alberta, Canada: 1–33.
10. Malhotra, V. (1981). "The Social Accomplishment of Music in a Symphony Orchestra: A Phenomenological Analysis". Qualitative Sociology 4 (2): 102–125. doi:10.1007/bf00987214.
11. Gu, X.; Dick, M.; Noyer, U.; Wolf, L. (2004). "NMP - a new networked music performance system". Global Telecommunications Conference Workshops, 2004 (GlobeCom Workshops 2004). IEEE. pp. 176–185.
12. Kurtisi, Z.; Gu, X.; Wolf, L. (2006). "Enabling network-centric music performance in wide-area networks". Communications of the ACM 49 (11): 52–54. doi:10.1145/1167838.1167862.
13. "Distributed Immersive Performance". Retrieved 2009-11-23.
14. "DIAMOUSES". Retrieved 2009-11-22.
15. "A3Lab - WeMUST Research page". Retrieved 2015-02-24.
16. Gabrielli, L.; Bussolotto, M.; Squartini, S. (2014). "Reducing the Latency in Live Music Transmission with the BeagleBoard xM Through Resampling". EDERC 2014, Milan, Italy. IEEE.