Digital waveguide synthesis

Digital waveguide synthesis is the synthesis of audio using a digital waveguide. Digital waveguides are efficient computational models for physical media through which acoustic waves propagate. For this reason, digital waveguides constitute a major part of most modern physical modeling synthesizers.

A lossless digital waveguide realizes the discrete form of d'Alembert's solution of the one-dimensional wave equation as the superposition of a right-going wave and a left-going wave,

y(m, n) = y⁺(m − n) + y⁻(m + n)

where y⁺ is the right-going wave and y⁻ is the left-going wave. It can be seen from this representation that sampling the function y at a given point m and time n merely involves summing two delayed copies of its traveling waves. These traveling waves will reflect at boundaries such as the suspension points of vibrating strings or the open or closed ends of tubes. Hence the waves travel along closed loops.
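As an illustration, the following minimal Python sketch (the delay-line length, pluck shape and observation point are assumptions of the example) simulates an ideal lossless string as two traveling-wave delay lines that reflect into each other, with a sign inversion, at rigid terminations; the output at any point is simply the sum of the two delayed waves.

    N = 64                          # delay-line length in samples (hypothetical)
    right = [0.0] * N               # y⁺ : right-going traveling wave
    left  = [0.0] * N               # y⁻ : left-going traveling wave

    # Pluck: split a triangular initial displacement equally between the waves.
    for m in range(N):
        shape = m / N if m < N // 2 else (N - m) / N
        right[m] = 0.5 * shape
        left[m]  = 0.5 * shape

    output = []
    for n in range(1000):
        # y(m, n) = y⁺(m − n) + y⁻(m + n): the displacement at the observation
        # point is just the sum of the two delayed traveling waves.
        output.append(right[N // 3] + left[N // 3])

        # Advance each wave by one sample in its direction of travel and
        # reflect, with inverted sign, into the opposite wave at the rigid ends.
        end_right, end_left = right[-1], left[0]
        right = [-end_left] + right[:-1]
        left  = left[1:] + [-end_right]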

Digital waveguide models therefore comprise digital delay lines, closed into recursive loops by the boundary reflections, to represent the geometry of the waveguide; digital filters to represent the frequency-dependent losses and mild dispersion in the medium; and often non-linear elements. Losses incurred throughout the medium are generally consolidated so that they can be calculated once at the termination of a delay line, rather than many times throughout.
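In the simplest string models the two delay lines and both reflections collapse into a single feedback loop, and the distributed losses are lumped into one filter applied once per round trip; this is essentially the Karplus–Strong loop mentioned below. A minimal Python sketch, with the pitch, loss factor and noise excitation chosen only for illustration:

    import random

    sample_rate = 44100
    frequency   = 220.0                            # illustrative pitch
    loop_len    = int(sample_rate / frequency)     # round-trip delay in samples
    loss        = 0.996                            # lumped round-trip loss factor

    # Excite the loop with noise (a crude pluck).
    delay_line = [random.uniform(-1.0, 1.0) for _ in range(loop_len)]

    output, prev = [], 0.0
    for n in range(2 * sample_rate):               # two seconds of output
        x = delay_line.pop(0)
        # All per-sample losses and mild dispersion are consolidated into one
        # low-pass (two-point average) applied once per trip around the loop.
        y = loss * 0.5 * (x + prev)
        prev = x
        delay_line.append(y)
        output.append(x)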

Waveguides such as acoustic tubes are three-dimensional, but because their lengths are often much greater than their cross-sectional dimensions, it is reasonable and computationally efficient to model them as one-dimensional waveguides. Membranes, as used in drums, may be modeled using two-dimensional waveguide meshes, and reverberation in three-dimensional spaces may be modeled using three-dimensional meshes. Vibraphone bars, bells, singing bowls and other sounding solids (also called idiophones) can be modeled by a related method called banded waveguides, in which multiple band-limited digital waveguide elements are used to model the strongly dispersive behavior of waves in solids.
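A common two-dimensional structure is the 4-port rectilinear waveguide mesh, in which each junction scatters its four incoming waves and passes the outgoing waves to its neighbours one sample later. The following Python sketch (mesh size, excitation and the rigid, sign-inverting edges are assumptions of the example; NumPy is used only for brevity) shows one such update step:

    import numpy as np

    W, H = 16, 16
    N_PORT, S_PORT, E_PORT, W_PORT = 0, 1, 2, 3
    inc = np.zeros((4, H, W))              # incoming wave per port and junction
    inc[:, H // 2, W // 2] = 0.25          # impulse excitation at the centre

    def mesh_step(inc):
        # Scattering: with equal port impedances the junction value is half the
        # sum of its incoming waves; each outgoing wave is v_J minus the input.
        v_j = 0.5 * inc.sum(axis=0)
        out = v_j - inc

        nxt = np.zeros_like(inc)
        # Propagation: an outgoing wave becomes the incoming wave at the
        # opposite port of the neighbouring junction one sample later.
        nxt[S_PORT, :-1, :] = out[N_PORT, 1:, :]
        nxt[N_PORT, 1:, :]  = out[S_PORT, :-1, :]
        nxt[W_PORT, :, 1:]  = out[E_PORT, :, :-1]
        nxt[E_PORT, :, :-1] = out[W_PORT, :, 1:]
        # Rigid edges: waves leaving the mesh reflect back with a sign change.
        nxt[N_PORT, 0, :]  = -out[N_PORT, 0, :]
        nxt[S_PORT, -1, :] = -out[S_PORT, -1, :]
        nxt[E_PORT, :, -1] = -out[E_PORT, :, -1]
        nxt[W_PORT, :, 0]  = -out[W_PORT, :, 0]
        return nxt, v_j

    signal = []
    for n in range(500):
        inc, v_j = mesh_step(inc)
        signal.append(v_j[H // 4, W // 4])  # "listen" at one junction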

The term "digital waveguide synthesis" was coined by Julius O. Smith III who helped develop it and eventually filed the patent. It represents an extension of the Karplus–Strong algorithm. Stanford University owns the patent rights for digital waveguide synthesis and signed an agreement in 1989 to develop the technology with Yamaha.

An extension to DWG synthesis of strings made by Smith is commuted synthesis, wherein the excitation to the digital waveguide contains both the string excitation and the body response of the instrument. This is possible because the digital waveguide is linear and time-invariant, so the body filter can be commuted to its input; it then becomes unnecessary to model the instrument body's resonances after synthesizing the string output, greatly reducing the number of computations required for a convincing resynthesis.
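As a sketch of the idea in Python (the excitation burst and body impulse response below are made-up placeholders, not measured data): the pluck is convolved with the body impulse response once, and the result drives the same lumped-loss string loop as above.

    def convolve(a, b):
        # Direct convolution; a practical implementation would use an FFT.
        out = [0.0] * (len(a) + len(b) - 1)
        for i, ai in enumerate(a):
            for j, bj in enumerate(b):
                out[i + j] += ai * bj
        return out

    pluck   = [1.0, 0.6, 0.2]                      # short excitation burst (placeholder)
    body_ir = [0.9, -0.4, 0.25, -0.1, 0.05]        # placeholder body impulse response

    # Commuted excitation: the body response is folded into the input once.
    excitation = convolve(pluck, body_ir)

    # Drive the lumped-loss string loop with the commuted excitation.
    loop_len, loss = 200, 0.996
    delay_line = [0.0] * loop_len
    output, prev = [], 0.0
    for n in range(44100):
        x = delay_line.pop(0)
        drive = excitation[n] if n < len(excitation) else 0.0
        y = loss * 0.5 * (x + prev) + drive
        prev = x
        delay_line.append(y)
        output.append(x)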

Prototype software implementations by Smith and colleagues were done in the Synthesis Toolkit (STK).[1][2]

The first musical use of digital waveguide synthesis was in the composition May All Your Children Be Acrobats (1981) by David A. Jaffe, followed by his Silicon Valley Breakdown (1982).
