Audio mixing (recorded music)
In sound recording and reproduction, audio mixing (or "mix down") is the process that commences after all tracks have been recorded (often called tracking) and edited as individual parts. The mixing process can include, but is not limited to, setting levels, setting equalization, using stereo panning, and adding effects. The way a song is mixed has as much impact on the way it sounds as each of the individual parts that have been recorded. Minor adjustments in the relationship among the various instruments within a song can have a dramatic impact on how it affects listeners.[1]
Audio mixing is utilized as part of creating an album or single. Mixing is largely dependent on both the arrangement and the recordings.[2] The mixing stage often follows a multitrack recording. The process is generally carried out by a mixing engineer, though sometimes it is the musical producer, or even the artist, who mixes the recorded material. After mixing, a mastering engineer prepares the final product for reproduction on a CD, for radio, or otherwise.
Prior to the emergence of digital audio workstations (DAWs), the process of mixing was carried out on a mixing console. Today, more and more engineers and independent artists use a personal computer for the process. Mixing consoles still play a large part in the recording process. They are often used in conjunction with a DAW, although the DAW may only be used as a multitrack recorder and for editing or sequencing, with the actual mixing performed on the console.
The role of audio mixing
In its simplest form, an audio mixer combines several incoming signals into a single output signal. This is not as simple as connecting the input signals in parallel and feeding them to a single output, because the signals could influence each other.[3] To combine different signals, they must first be mixed so that each sits in a deliberate hierarchy of level relative to the others. The role of a music producer is not necessarily a technical one, with the physical aspects of recording being assumed by the audio engineer, and so producers often leave the similarly technical mixing process to a specialist audio mixer. Even producers with a technical background may prefer that a mixer come in to take care of the final stage of the production process. Noted producer and mixer Joe Chiccarelli has said that it is often better for a project that an outside person comes in because:
"when you're spending months on a project you get so mired in the detail that you can't bring all the enthusiasm to the final [mixing] stage that you'd like. [You] need somebody else to take over those responsibilities so that you can sit back and regain your objectivity."[4]
However, as Chiccarelli explains, sometimes limited budgets dictate that a producer takes care of the mixing as well.[4]
History
Early recording machines
Edison and Berliner first developed recording machines in the last years of the nineteenth century, using little or no electrical apparatus. The recording and reproduction process itself was completely mechanical: a small horn terminated in a stretched, flexible diaphragm attached to a stylus, which cut a groove of varying depth into the malleable tin foil of Edison's "phonograph" cylinder, or of varying lateral deviation in the wax of Berliner's gramophone disc.[5]
Electronic recording became more widely used during the 1920s. It was based on the principles of electromagnetic transduction. Because a microphone could be connected remotely to a recording machine, microphones could be positioned in more suitable places and connected by wire to a complementary transducer, which drove the stylus that cut the disc. Even more useful was the fact that the outputs of several microphones could be mixed together before being fed to the disc cutter, allowing greater flexibility in the balance.[6]
Before the introduction of multitrack recording, all the sounds and effects that were to be part of a recording were mixed together at one time during a live performance. If the recorded blend (or mix, as it is called) wasn't satisfactory, or if one musician made a mistake, the selection had to be performed over until the desired balance and performance was obtained. However, with the introduction of multitrack recording, the production phase of a modern recording has radically changed into one that generally involves three stages: recording, overdubbing, and mixdown.[7]
Mixing as we know it today emerged with the introduction of commercial multitrack tape machines, most notably the 8-track recorders introduced during the 1960s. The ability to record sounds onto a multitude of channels meant that treating those sounds could be postponed to a later stage – the mixing stage.
In the 1980s, home recording and mixing began to take market share from recording studios. The 4-track Portastudio was introduced in 1979. Using one, Bruce Springsteen released the album Nebraska in 1982. The Eurythmics topped the charts in 1983 with the song "Sweet Dreams (Are Made of This)", recorded by band member Dave Stewart on a makeshift 8-track recorder.[8] In the mid-to-late 1990s, computers replaced tape-based recording for most home studios, with the Power Macintosh proving popular.[9] At the same time, digital audio workstations (DAWs), first used in the mid-1980s, began to replace tape in many professional recording studios.
Equipment
Mixing Consoles
A mixer, also called a mixing console, mixing desk, mixing board, or software mixer, is the operational heart of the mixing process.[10] Mixers offer a multitude of inputs, each fed by a track from a multitrack recorder. Mixers typically have two main outputs (in the case of two-channel stereo mixing) or eight (in the case of surround).
Mixers offer three main functionalities:[10][11]
- Mixing – summing signals together, normally done by a dedicated summing amplifier or, in digital mixers, by a simple algorithm (a minimal sketch of such summing appears after this list).
- Routing – allows the routing of source signals to internal buses or external processing units and effects.
- Processing – many mixers also offer on-board processors, like equalizers and compressors.
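To illustrate the summing and panning functions described above, the core operation of a digital mixer can be sketched in a few lines of code. This is a minimal example under stated assumptions, not any particular console's algorithm; the track data, fader settings, and pan positions are hypothetical, and a constant-power pan law is assumed.

```python
import numpy as np

def db_to_gain(db):
    """Convert a fader setting in decibels to a linear gain factor."""
    return 10.0 ** (db / 20.0)

def mix_to_stereo(tracks, faders_db, pans):
    """Sum mono tracks into a stereo bus.

    tracks    : list of equal-length 1-D numpy arrays (mono audio)
    faders_db : per-track fader settings in dB (0 dB = unity gain)
    pans      : per-track pan positions, -1.0 (hard left) to +1.0 (hard right)
    """
    bus = np.zeros((len(tracks[0]), 2))
    for track, fader_db, pan in zip(tracks, faders_db, pans):
        gain = db_to_gain(fader_db)
        # Constant-power pan law: overall loudness stays roughly even
        # as a source moves across the stereo image.
        angle = (pan + 1.0) * np.pi / 4.0          # 0 .. pi/2
        left, right = np.cos(angle), np.sin(angle)
        bus[:, 0] += track * gain * left
        bus[:, 1] += track * gain * right
    return bus

# Hypothetical session: three one-second test tones at 48 kHz.
sr = 48000
t = np.arange(sr) / sr
tracks = [np.sin(2 * np.pi * f * t) for f in (110.0, 220.0, 440.0)]
stereo = mix_to_stereo(tracks, faders_db=[-3.0, -6.0, 0.0], pans=[-0.5, 0.5, 0.0])
```

In a real console or DAW the same summing happens per sample in dedicated hardware or plug-in code, but the arithmetic is essentially this weighted addition.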
Mixing consoles used for dubbing are often large and intimidating, with an exceptional number of controls. Fortunately, there is a great deal of duplication among these controls, so by studying just one area of a console one learns nearly all of the areas. Ultimately, the mixing console can be broken down into two ingredients: processing and configuration. Sound processes are the devices used to manipulate the sound, from simple internal level controls to sophisticated outboard reverberation units, whereas configuration concerns the signal routing from the input to the output of the console through the various processes.[12]:172
Digital audio workstations (DAWs) today have many mixing features and potentially more processes available than those of a major console. The usual distinction between a DAW equipped with a control surface and a large console is that a digital console has dedicated digital signal processors for each channel and is thus designed not to "overload" under the burden of signal processing, crash, or lose signals. A DAW assigns resources such as digital signal processing power dynamically, and can run out of them if many signal processes are in simultaneous use. The upside is that this can be solved fairly easily by plugging more hardware into the DAW; the downside is that the cost of doing so may approach that of a major console.[12]:173
Outboard gear and plugins
Outboard gear (analog) and software plugins (digital) can be inserted into the signal path in order to extend processing possibilities. Outboard gear and plugins fall into two main categories:[10][11]
- Processors – these devices are normally connected in series to the signal path, so the input signal is replaced with the processed signal (e.g., equalizers).
- Effects – while an effect can be considered as any unit that affects the signal, the term is mostly used to describe units that are connected in parallel to the signal path and therefore they add to the existing sounds, but do not replace them. Examples would include reverb and delay.
Multiple Level Controls in Signal Path
A single signal can pass through a large number of level controls, such as an individual channel fader, subgroup master fader, master fader, and monitor volume control. According to Holman, this multiplicity of controls creates problems: every console has its own dynamic range, so it is important to use it correctly in order to avoid excessive noise or distortion. Setting the various controls correctly can nevertheless be accomplished relatively easily. Holman points to the scale of the control as a clue: with 0 dB being the nominal setting, many controls have "gain in hand" that extends above 0 dB, meaning a control can be turned up from its nominal setting when a source needs to come through clearly. Other controls, such as submasters and master level controls, are used for slight trims of the section-by-section balance or for the main fade-ins and fade-outs of the overall mix.[12]:174
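The arithmetic behind this gain structure is straightforward: level controls connected in series multiply in linear terms, which means their decibel settings simply add. The toy calculation below uses made-up fader settings, not values from the source.

```python
def chain_gain_db(*stage_settings_db):
    """Total gain of level controls connected in series: dB settings add."""
    return sum(stage_settings_db)

# Hypothetical gain structure: channel fader pushed +6 dB into its
# "gain in hand" region, offset by a -6 dB subgroup trim, master at nominal.
total_db = chain_gain_db(6.0, -6.0, 0.0)     # 0 dB overall
linear_gain = 10.0 ** (total_db / 20.0)      # 1.0, i.e. unity gain
```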
Processes that Affect Audio Levels
- Faders – used to attenuate or boost the level of signals.
- Pan pots – A fundamental part of configuration in a recording console is panning. Pan pots are devices that place a sound among the channels (L, C, R, LS, and RS), panning signals to the left or right and, in surround, to the front or back.[12]:174
- Compressors – Every track has a volume range. When tracks are combined in mixing, the problem of unintentional masking of one signal by another arises. For example, a dialog recording has one volume range and a music recording has its own. Although most of the time the music will lie underneath the dialog, at certain moments the peaks of the music may coincide with the minimum level of the dialog, and the dialog will be obscured.[12]:174
To solve this problem we could decrease the gain of the music during its louder passages and increase it during its softer ones to maintain a more even level behind the dialog, but doing this by hand is time-consuming. The process can easily be automated by a device called a compressor, which is equipped with controls to vary the volume range over which compression occurs, the amount of compression, and how fast or slow the compressor acts (a simplified sketch of this gain computation appears after this list).[12]:175
- Expansion – An expander does exactly the opposite of a compressor: it increases the volume range of a source, either across a wide dynamic range or restricted to a narrower region by its controls. Restricting expansion to only low-level sounds helps to minimize noise. This function is often referred to as downward expansion, noise gating, or keying, and turns the level down below a threshold set by a specific control. Noise gates have numerous audible problems. If, for instance, a dialog recording contains air-conditioning noise, the threshold of the noise gate can distinguish between the dialog and the air conditioner because the air-conditioning noise is lower in level than the dialog. The problem is that the air-conditioning noise disappears between lines, yet returns behind the dialog; this exaggerated difference is likely to be much more noticeable than if the audio had been left unprocessed.[12]:176
- Limiters – A limiter acts on signals above a certain threshold. Above that threshold, the level is controlled so that for each dB of increase on the input, the gain is reduced by the same amount. The output level above the threshold therefore stays exactly the same, regardless of any increase in the input level. Limiters can be used to catch occasional events that might not otherwise be controlled, bringing them into a range that the recording medium can handle linearly.[12]:176
The items discussed thus far affect the level of the audio signal. The most commonly used process is level control, which is found even on the simplest of mixers.[12]:177
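The gain computation performed by a compressor can be illustrated with a short, simplified sketch. This is only a static model under assumed settings – real compressors also apply the attack and release smoothing mentioned above – and the threshold, ratio, and block size are arbitrary example values. Pushing the ratio very high turns the same curve into a limiter.

```python
import numpy as np

def compressor_gain_db(level_db, threshold_db=-20.0, ratio=4.0):
    """Static compression curve: above the threshold, each dB of input
    level yields only 1/ratio dB of output level.  A very large ratio
    approximates a limiter, whose output barely rises above the threshold."""
    over = level_db - threshold_db
    if over <= 0.0:
        return 0.0                       # below threshold: no gain change
    return -over * (1.0 - 1.0 / ratio)   # gain reduction in dB

def compress(signal, threshold_db=-20.0, ratio=4.0, block=512):
    """Apply block-by-block static compression to a mono signal
    (no attack/release smoothing, which a real unit would add)."""
    signal = np.asarray(signal, dtype=float)
    out = signal.copy()
    for start in range(0, len(signal), block):
        chunk = signal[start:start + block]
        rms = np.sqrt(np.mean(chunk ** 2)) + 1e-12
        level_db = 20.0 * np.log10(rms)
        gain_db = compressor_gain_db(level_db, threshold_db, ratio)
        out[start:start + block] = chunk * 10.0 ** (gain_db / 20.0)
    return out
```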
Processes that Affect the Frequency Response
Processes that primarily affect the frequency response of the signal are generally seen as second in importance to level control. These processes clean up the audio signal, enhance interchangeability between signals, adjust for the loudness effect, and generally create a more pleasant (or deliberately worse) sound. There are two principal frequency-response processes – equalization and filtering.[12]:177
- Equalizers – The simplest description of EQ is the process of altering the frequency response in a manner similar to the tone controls on a stereo system. Professional EQs divide the audio spectrum into three or four bands, which may be called the low-bass, mid-bass, mid-treble, and high-frequency controls.[12]:178
- Filters – Filters are used to essentially eliminate certain frequencies from the output; they strip away part of the audio spectrum. There are various types of filters (a minimal sketch of one appears below):
High-pass filter (low-cut): used to remove excessive room noise at low frequencies.
Low-pass filter (high-cut): used to help isolate a low-frequency instrument playing in a studio along with others.
Band-pass filter: a combination of high- and low-pass filters, also known as a telephone filter, because a sound lacking in high and low frequencies resembles the quality of sound transmitted and received by telephone.[13]
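As an illustration of the high-pass (low-cut) case, a first-order filter can be written directly from its difference equation. This is a generic textbook filter rather than any particular console's implementation, and the cutoff frequency and sample rate are assumed values.

```python
import math

def high_pass(samples, cutoff_hz=80.0, sample_rate=48000.0):
    """First-order high-pass (low-cut) filter, e.g. to reduce
    low-frequency room rumble.  y[n] = a * (y[n-1] + x[n] - x[n-1])"""
    rc = 1.0 / (2.0 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    a = rc / (rc + dt)
    out = []
    prev_x = prev_y = 0.0
    for x in samples:
        y = a * (prev_y + x - prev_x)
        out.append(y)
        prev_x, prev_y = x, y
    return out
```

A low-pass (high-cut) filter is the mirror image of this, and cascading the two yields the band-pass ("telephone") response described above.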
Processes that Affect the Time Domain
- Reverbs – Reverbs are used to simulate boundary reflections created in a real room, adding a sense of space and depth to otherwise 'dry' recordings. Another use is to help distinguish among auditory objects; all sound having one reverberant character will be categorized together by human hearing in a process called auditory streaming. This is an important feature in layering sound in depth from in front of the speaker to behind it.[12]:181
For example, before the advent of electronic reverb and echo processing, more basic, 'physical' means were used to generate these effects. An echo chamber is a large reverberant room equipped with a loudspeaker and at least two spaced microphones: signals were sent to the loudspeaker, and the reverberation generated in the room was picked up by the two microphones, which constituted a "stereo return".[13]
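A very crude relative of these time-domain effects is a simple feedback delay (echo). The sketch below is nowhere near a convincing room simulation – real reverbs use dense networks of delays and filters – but it shows how delayed, decaying copies of a 'dry' signal are mixed back in. The delay time, feedback, and wet-mix values are arbitrary choices for the example.

```python
import numpy as np

def feedback_delay(dry, delay_seconds=0.25, feedback=0.4,
                   wet_mix=0.3, sample_rate=48000):
    """Mix delayed, decaying repeats of the input back into itself."""
    delay_samples = int(delay_seconds * sample_rate)
    out = np.asarray(dry, dtype=float).copy()
    buf = np.zeros(delay_samples)               # circular delay line
    idx = 0
    for n in range(len(out)):
        delayed = buf[idx]
        out[n] = out[n] + wet_mix * delayed     # dry signal plus echo
        buf[idx] = dry[n] + feedback * delayed  # write input plus feedback
        idx = (idx + 1) % delay_samples
    return out
```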
Downmixing
Downmixing is the process of making a stereo mix from a 5.1 surround mix, typically performed in the listener's home-theatre receiver. In the downmixing circuit, the left and right surround channels are blended with the left and right front channels, the centre channel is blended equally into the left and right channels, and the LFE channel is either mixed into the front signals or not used. Downmixes made this way seldom produce a well-balanced stereo mix, so the 5.1 mix should be checked for stereo compatibility. Surround monitoring systems should have a downmix button so one can hear how a surround mix will sound when downmixed to stereo by a consumer receiver. It is best to make a separate stereo mix and record it on tracks 7 and 8 of an eight-track mixdown recorder; this stereo mix can be put on DVD-Audio discs or Super Audio CDs along with the surround mix.[14]
Consumer electronics may also downmix automatically. For example, a DVD player or sound card may downmix a surround sound signal (four or more channels) to stereophonic sound (two channels) for playback through two speakers.
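The blend described above amounts to a weighted sum of channels. The sketch below uses the commonly cited -3 dB (about 0.707) coefficients for the centre and surround channels and discards the LFE; actual receivers may use different coefficients, so these values are only assumptions for illustration.

```python
import numpy as np

def downmix_5_1_to_stereo(l, r, c, lfe, ls, rs,
                          center_gain=0.7071, surround_gain=0.7071):
    """Fold a 5.1 mix down to stereo.  Each argument is a 1-D array of
    samples for one channel.  The LFE channel is discarded here, as many
    consumer receivers do."""
    left = l + center_gain * c + surround_gain * ls
    right = r + center_gain * c + surround_gain * rs
    return np.stack([left, right], axis=1)
```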
Mixing in surround
Any console with multiple buses (typically eight or more) can be used to create a surround-sound mix, but the important question is how easily signals can be routed, panned, and processed in a surround environment to create a 5.1 mix without undue frustration.
Whether the engineer is working in an analog hardware, digital hardware, or DAW "in-the-box" mixing environment, the ability to pan mono or stereo sources into a surround soundscape, place effects within the 5.1 field, and monitor multiple output formats without difficulty can make the difference between a difficult, compromised mix and one that lifts the spirits.[15]
Mixing in surround is very similar to mixing in stereo except that there are more speakers, placed to "surround" the listener. In addition to the horizontal panoramic options available in stereo, mixing in surround lets the mix engineer pan sources within a much wider and more enveloping environment. In a surround mix, sounds can appear to originate from many more directions, or almost any direction, depending on the number of speakers used, their placement, and how the audio is processed.
There are two common ways to approach mixing in surround:
- Expanded Stereo – With this approach, the mix will still sound very much like an ordinary stereo mix. Most of the sources such as the instruments of a band, the vocals, and so on, will still be panned between the left and right speakers, but lower levels might also be sent to the rear speakers in order to create a wider stereo image, while lead sources such as the main vocal might be sent to the center speaker. Additionally, reverb and delay effects will often be sent to the rear speakers to create a more realistic sense of being in a real acoustic space. In the case of mixing a live recording that was performed in front of an audience, signals recorded by microphones aimed at, or placed among the audience will also often be sent to the rear speakers to make the listener feel as if he or she is actually a part of the audience.
- Complete Surround/All speakers are treated equally – Instead of following the traditional ways of mixing in stereo, this much more liberal approach lets the mix engineer do anything he or she wants. Instruments can appear to originate from anywhere, or even spin around the listener. When done appropriately and with taste, interesting sonic experiences can be achieved, as was the case with James Guthrie's 5.1 mix of Pink Floyd's The Dark Side of the Moon, albeit with input from the band.[16] This is a much different mix from the 1970s quadraphonic mix.
Naturally, these two approaches can be combined in any way the mix engineer sees fit. More recently, a third approach to mixing in surround was developed by surround mix engineer Unne Liljeblad.
- MSS – Multi Stereo Surround[17] – This approach treats the speakers in a surround sound system as a multitude of stereo pairs. For example, a stereo recording of a piano, created using two microphones in an ORTF configuration, might have its left channel sent to the left rear speaker and its right channel sent to the center speaker. The piano might also be sent to a reverb having its left and right outputs sent to the left front speaker and right rear speaker, respectively. Additional elements of the song, such as an acoustic guitar recorded in stereo, might have its left and right channels sent to a different stereo pair such as the left front speaker and the right rear speaker with its reverb returning to yet another stereo pair, the left rear speaker and the center speaker. Thus, multiple clean stereo recordings surround the listener without the smearing comb filtering effects that often occur when the same or similar sources are sent to multiple speakers.
References
1. Strong, Jeff (2009). Home Recording For Musicians For Dummies (3rd ed.). Indianapolis, Indiana: Wiley Publishing, Inc. p. 249.
2. Hepworth-Sawyer, Russ (2009). From Demo to Delivery: The production process. Oxford, United Kingdom: Focal Press. p. 109.
3. Rumsey, Francis; McCormick, Tim (2009). Sound and Recording (6th ed.). Oxford, United Kingdom: Elsevier Ltd. p. 109. ISBN 978-0-240-52163-3.
4. "Interview with Joe Chiccarelli". HitQuarters. 14 June 2010. Retrieved 3 September 2010.
5. Rumsey, Francis; McCormick, Tim (2009). Sound and Recording (6th ed.). Oxford, United Kingdom: Elsevier Inc. p. 168. ISBN 978-0-240-52163-3.
6. Rumsey, Francis; McCormick, Tim (2009). Sound and Recording (6th ed.). Oxford, United Kingdom: Elsevier Inc. p. 169. ISBN 978-0-240-52163-3.
7. Huber, David Miles (2001). Modern Recording Techniques. Focal Press. p. 321. ISBN 0240804562.
8. "Eurythmics: Biography". Artist Directory. Rolling Stone. 2010. Retrieved 20 March 2010.
9. "Studio Recording Software: Personal And Project Audio Adventures". studiorecordingsoftware101.com. 2008. Retrieved 20 March 2010.
10. White, Paul (2003). Creative Recording (2nd ed.). Sanctuary Publishing. p. 335. ISBN 1-86074-456-7.
11. Izhaki, Roey (2008). Mixing Audio. Focal Press. p. 566. ISBN 978-0-240-52068-1.
12. Holman, Tomlinson (2010). Sound for Film and Television (3rd ed.). Oxford, United Kingdom: Elsevier Inc. ISBN 978-0-240-81330-1.
13. Rumsey, Francis; McCormick, Tim (2009). Sound and Recording (6th ed.). Oxford, United Kingdom: Elsevier Inc. p. 390. ISBN 978-0-240-52163-3.
14. Bartlett, Bruce; Bartlett, Jenny (2009). Practical Recording Techniques (5th ed.). Oxford, United Kingdom: Focal Press. p. 484. ISBN 978-0-240-81144-4.
15. Huber, David Miles; Runstein, Robert (2010). Modern Recording Techniques (7th ed.). Oxford, United Kingdom: Focal Press. p. 559. ISBN 978-0-240-81069-0.
16. http://www.digitalbits.com/reviewsdvdasacd/pinkfloyddarksidesacd.html
17. "Surround Sound Mixing". www.mix-engineer.com. Retrieved 12 January 2010.