“You gotta help me man—my CPU overloads all the time!” I hear this a lot. So I have my response teed up: “Are you using your effects as sends, or as inserts?” And more often than not, they respond, “What’s the difference, man?”
An insert is like putting a plug-in directly on the track. With a send, you take the track, and you route a sort of copy of the track to an auxiliary channel, and put your effects there, on that channel.
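For readers who think in code, the difference can be sketched with a toy example in Python. Everything here is hypothetical: `reverb` is just a placeholder effect, and signals are plain lists of sample values. Real DAW routing works on audio buffers, but the arithmetic of the two topologies is the same.

```python
# Toy model of insert vs. send routing. `reverb` is a hypothetical
# placeholder effect, not real DSP.

def reverb(samples):
    """Stand-in effect: returns a fully wet, processed copy."""
    return [s * 0.5 for s in samples]  # placeholder "processing"

def insert_routing(track):
    # Insert: the effect sits directly on the track, replacing its signal.
    return reverb(track)

def send_routing(track, send_level=0.3):
    # Send: a copy of the track (scaled by the send level) feeds an aux
    # holding the effect; the dry track passes through untouched, and
    # the aux return is summed back into the mix.
    aux_return = reverb([s * send_level for s in track])
    return [dry + wet for dry, wet in zip(track, aux_return)]
```

The point of the sketch: with a send, the dry signal survives untouched and the effected copy is added on top; with an insert, the effect replaces the signal in place. And twenty tracks can all feed the same single aux, which is where the CPU savings come from.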
When I’ve taken a peek at their session, I usually see the telltale culprit: twenty or so reverbs used as inserts, often with the same settings (one for the kick, one for the snare top, one for the snare bottom, etc).
For the audio newcomer, it’s not always obvious when to use an effect as an insert, and when to use it on a send. There are no hard and fast rules—this must be said outright. Still, it’s good to have guidelines, so you can better know what to do. Before we dive in, make sure you have a solid understanding of audio signal flow within a DAW.
To send or not to send? That is the question! Here are some answers:
Reverb, delay, compression, modulation, distortion—these are some effects that often wind up on auxiliary channels. You send some of your track to a reverb aux, and dial in as much as needed. But if the effect is meant to be used in series and will receive further processing at the track level, it's wiser to use the effect as an insert, even if it operates in parallel internally (as in a chorus with a wet/dry control).
Say you have a background vocal that needs a bit of saturation, and you want to hear it in stereo, surrounding the main vocal. You might use a combination of stereo delay and stereo EQ to achieve this effect. Or perhaps as you work you find the distortion works better when it comes after the delay and EQ—the slightly darker left hits the saturation differently from the brighter right, for example.
We often use delay on an aux, in the context of a send, but here it’s better not to: the order of events is delay, stereo EQ, and then distortion. A different order of events would yield a different sound.
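The order-dependence is easy to make concrete with a toy signal chain in Python. Both functions below are crude, hypothetical stand-ins (a one-sample echo and a hard clipper), assumed only for illustration.

```python
# Toy demonstration that effect order matters: a nonlinear stage
# (distortion) after a linear one (delay) gives a different result
# than the reverse order. Both effects are hypothetical stand-ins.

def delay(samples, amount=0.5):
    # Crude one-sample echo: mix each sample with the previous one.
    out, prev = [], 0.0
    for s in samples:
        out.append(s + amount * prev)
        prev = s
    return out

def distort(samples, ceiling=1.0):
    # Hard clip, a blunt stand-in for saturation.
    return [max(-ceiling, min(ceiling, s)) for s in samples]

signal = [1.0, 1.0, 0.0]
a = distort(delay(signal))   # delay -> distortion: [1.0, 1.0, 0.5]
b = delay(distort(signal))   # distortion -> delay: [1.0, 1.5, 0.5]
# In `a` the clipper reacts to the hotter, echoed signal; in `b` the
# echo is added after clipping and escapes the ceiling entirely.
```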
Pictured above is a chain of delay, EQ, and distortion. It sounds distinctly different from the same processing with the delay bussed after the EQ and distortion.
The classic example for this principle is ‘New York style compression’, where you parallel-compress the kick and snare, sometimes with a bit of added low end and high end, to create the appearance of a louder track. In effect, the track now punches a bit more.
Here, it is better to send to an aux, and apply compression to that auxiliary track. Doing any extra work to the track itself might upset the balance of your mix, and might make the mix feel more clouded and constricted when all the instruments come in. The send, however, gives you the ability to ride the level of the parallel compression. Here are some interesting ways to use parallel processing in your next production.
Many plug-ins now sport the ability to mix compression right in the module itself for parallel effects. iZotope’s compressors are no exception. Still, I would advise sending to an aux track in this case to ride into the mix when needed, because it’s the overall sound of the entire plug-in chain you’d like to affect with compression. If you were to parallel-compress within the track, and then add another plug-in, you are now compressing into that new effect, which may not be what you originally intended.
Here we’re describing the typical use of compression: to turn down overly loud signals in a musical way. Most of the time, this sort of compression should be handled in the typical manner, with a plug-in used as an insert.
But while we’re here, let’s talk about what parallel compression can also often be: upwards compression, where you don’t lower the level of a loud source, but raise the level of a quieter one.
As famed mastering engineer Bob Katz noted, when you route a compressor in parallel, edging the compressed aux into the mix, the effect of that compressor is often masked during the loud parts of the song. It’s only when the original track dips in level that you can hear the compressor doing its thing. The process is, in some ways, the opposite of typical downwards compression: while the range is still constricted, the quiet parts are now a bit louder.
But in typical compression, where we’re bringing peaks down, we don’t want the effect of bringing up the quieter material. That’s one reason to run this kind of compression as an insert.
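The upward-compression behavior falls out of simple arithmetic. Here’s a sketch in Python using a hypothetical static compressor curve; the numbers are invented, but the relationship they show (quiet material lifted proportionally more than loud material) is the point.

```python
# Sketch of parallel compression acting as upward compression. The
# compressor curve below is a hypothetical static approximation,
# not a model of any real plug-in.

def compress(level, threshold=0.25, ratio=4.0):
    # Above the threshold, gain is reduced by the ratio.
    if level <= threshold:
        return level
    return threshold + (level - threshold) / ratio

def parallel(level, blend=0.5):
    # Dry signal at full level, plus a blend of the compressed copy.
    return level + blend * compress(level)

quiet, loud = 0.125, 1.0
# Dry dynamic range: loud / quiet = 8.0
# Blended range: parallel(loud) / parallel(quiet) = 1.21875 / 0.1875 = 6.5
# The quiet passage gained 50%, the loud one only ~22%: the floor came up.
```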
For our purposes, modulation is an umbrella term for anything involving a tiny, fluid amount of frequency or time-based trickery. Modulation includes chorusing, phasing, flanging, and other processes, and has many applications. It can thicken, widen, pan (to some extent) and most importantly it can indicate. For example:
Put a certain type of phaser on a guitar, and you’re in the 70s.
Put a different kind of phaser on that same guitar, and you’re suddenly in alternative 90s land.
When employing modulation (like in Iris 2!) to indicate a genre or an era, I’d advise using it on an insert rather than as a send.
It’s a matter of two different goals. One goal is to develop a sound within an existing track. The other goal, more macro in its scope, is mix-based: to help a track stand out amongst many others.
With a phaser employed directly, we are making a bold statement about what kind of story we’re telling with the individual track. Something as dry and in your face as a punk bass might not need this bold modulating statement, but it still might need a phaser, a chorus, or some other piece of modulation. If you’re struggling to understand chorus, flangers and phasers in audio production, don’t worry, we have a guide for you!
Which brings us to our next tip:
Imagine a dry punk track, the kind where no sort of glammy, unnecessary processing should be audible. You’ve got two guitars blaring away in each speaker. The bass part, going up the middle, blends into the frequency range of the guitars, and gets a bit lost in the process (try these bass mixing tips if this is a common issue for you). You like the EQ you’ve got going on in the bass—it works with the drums, and besides, any other setting wouldn’t work with the genre.
Here you can send that bass to an aux track, apply some compression (strategically!), some EQ for emphasis in the high midrange, and then a bit of chorusing, phasing, or flanging (depending on the song).
The effect will be focused on the high-midrange frequencies, where that pick attack lies. Here we’ve added a process that modulates the frequency range we need to distinguish, a part that actively brings it out of alignment with the original track. The result is a kind of reinforcement.
Modulation can be used for such reinforcement, whether it’s chorus employed to thicken a vocal, or flanging to give an otherwise unmodulated lead guitar an edge for the solo.
It’s the sending that makes all the difference, as the modulation is meant to help the track stand out against the mix, or blend deeper into it, whatever the case may be. Putting it right on the track wouldn’t achieve the same effect—and it would get in the way of downstream processing, should you add more. Downstream processing might, in turn, change the timbre of what’s sent to the aux, but (usually) only if you make a drastic move.
Sometimes we want to put sounds in the same physical space. A couple of guitars, a vocal, and the drums could all go to the same reverb in order to create a live, coherent feeling—something like a stadium verb for example.
In this case, it makes sense to send these elements to an auxiliary, one with a reverb that helps them all cohere. The reverb can be EQ’d, distorted, or compressed however you like. The point is, all these elements need to feel like they’re coming from the same space—a space this reverb signals—so we send them all to the same reverb.
This is the most common way to use reverb (and one of the most common places to make mistakes), and with good reason. For one, a pile of different reverbs, with slightly different settings, all blended at varying amounts, is a surefire path to muddiness; one verb to rule them all is not only expedient, it also creates a clearer and more delineated sonic space. If you’re struggling to choose a reverb for your next production, this guide might help.
The other reason is CPU: plug-ins eat the stuff up, reverbs especially. If you have limited resources, it’s a far better investment to send elements to one verb than to run multiple instances.
And here we come to the flip of the coin: a mono guitar sound is sometimes made more believable, and more genre-specific, with the addition of a spring reverb. Do we send this verb to an auxiliary? I’d say usually not—especially if the track needs to retain its mono feel. Sure, we could send the spring to the opposing left or right channel, but that’s not what we’re talking about. We’re talking about a beautiful spring verb that’s missing from the recording itself!
This spring verb is so elemental to the communication of the guitar that it demands usage as an insert. A copy of the signal, effected with a verb on an aux, feels farther away from the original part. Using wet/dry balancing rather than auxiliary levels, in this case, can feel like an amp recorded with a spring reverb.
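In code, the two balancing schemes differ by a single term. This is a toy sketch: `dry` and `wet` stand for a dry sample and its fully effected copy, and both functions are hypothetical. The wet/dry knob trades dry level for wet, which is exactly the enmeshed, recorded-through-the-amp quality; a send only adds on top.

```python
# Toy comparison of an insert's wet/dry control vs. a send fader.

def wet_dry_insert(dry, wet, mix):
    # Insert with a wet/dry knob: turning up the effect attenuates
    # the dry signal at the same time.
    return (1 - mix) * dry + mix * wet

def send_blend(dry, wet, send_level):
    # Send: the dry signal stays at full level; the effected copy
    # is simply added on top.
    return dry + send_level * wet

# At a 50% setting, the insert has traded away a quarter of its dry
# level to the effect, while the send has only gained level:
# wet_dry_insert(1.0, 0.5, 0.5) -> 0.75
# send_blend(1.0, 0.5, 0.5)     -> 1.25
```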
This is the difference between reinforcing an elemental part of the track and adding a little extra something, be it to add depth or cut (and yes, reverb can be used to cut, especially a controlled verb on a dry, hype-less snare).
One type of verb marries itself—enmeshes itself—into the aesthetic. The other brings something extra to the table, to heighten perception within the context of the mix itself.
Tom drums often benefit from using insert reverb (in this case, you route the toms to a submix and use reverb as an insert there). Sometimes, the whole drum buss can be made more exciting and more believable with this technique.
Reverb used in such a way, directly, often with an emphasis on early reflections, and blended in ever so slightly, can help the root, quintessential sound of the instrument feel more record-ready. Used in the right manner, insert-verb can even be receptive to downstream processing—it can sound wonderful if slightly compressed, or excited with harmonic distortion.
Sometimes you have an instance where a bit of an effect, say a delay, must exist on its own. However, it also must be fed into another effect (say a reverb) for artistic purposes. Usually, it’s to create a more involved, complicated feeling of space.
An arena, for instance, often exhibits both slap echo and a large verb; in fact, the slap echo is often part of the reverb itself. These are multiple processes that must be balanced in excruciating detail. If we choose to use sends, we’re free to balance in a more exacting way.
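The routing might be sketched like this in Python. Both effects are hypothetical toy stand-ins; the part that matters is the topology: the track feeds a slap-delay aux and a reverb aux, and the slap-delay aux is also sent into the reverb, so the echo lives inside the same space as the dry signal.

```python
# Toy sketch of nested sends: track -> slap aux, and both the track and
# the slap aux -> reverb aux. All effects are hypothetical stand-ins.

def slap_delay(samples, gap=2, amount=0.6):
    # Crude slap echo: add a delayed copy `gap` samples later.
    out = list(samples) + [0.0] * gap
    for i, s in enumerate(samples):
        out[i + gap] += amount * s
    return out

def big_reverb(samples, decay=0.5):
    # Placeholder "large verb": a smeared, decaying running sum.
    out, tail = [], 0.0
    for s in samples:
        tail = s + decay * tail
        out.append(tail)
    return out

track = [1.0, 0.0, 0.0, 0.0]
slap_aux = slap_delay(track)                 # echo on its own fader
padded = track + [0.0] * (len(slap_aux) - len(track))
reverb_aux = big_reverb([d + e for d, e in zip(padded, slap_aux)])
# Final mix: sum track, slap_aux, and reverb_aux, each on its own fader,
# so the dry hit, the echo, and the space can be balanced independently.
```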
Yes, it’s hard to know which situation calls for which technique, especially with elements like reverb and delay. And, even with guidelines and self-imposed rules, sometimes it sounds better to break them. Still, I like to ask myself this simple question before proceeding:
Am I trying to define the sound of a track, or am I trying to help it stand out against the mix?
Both are valid aims, but they tend to have different answers. If I’m trying to bolster the track, I go for the insert, and if I’m trying to bolster the mix, I go for the send. That’s the general principle, anyway. To send or not to send? The answer is, “it depends.” I hope that after reading this article, you’ll have a better idea of when to use which technique.