Using submixes to group tracks together into a smaller number of elements is an effective way of having more control over your mix. But once you know the basics of mix buses, how do you decide which submixes to create? Which instruments should you group together?
Ultimately, you will need to develop your own system of creating submixes, so in this article, we’ll share a few different approaches you can consider applying to your mixing workflow.
A submix is a channel used to combine a group of audio signals that should be processed together—like all the low end elements of a mix routed to one bus channel. Aside from being an organizational technique that can help with managing large sessions, creating submixes can be an effective way of having greater control over the end result of your mix. By breaking your mix down to a few submixes you can streamline your mixing workflow with the ability to automate multiple track levels with one fader and process a whole group of related tracks simultaneously with a single effects chain to glue them together. To create a submix, you simply highlight all the applicable tracks you want to group together and assign their outputs to a free, stereo bus channel.
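The routing described above can be sketched in code. Below is a minimal, hypothetical illustration in Python with NumPy: the grouped tracks are summed to one bus, and a single fader gain scales the whole group. Real DAWs handle this routing internally; the function name and values are made up for illustration.

```python
import numpy as np

def make_submix(tracks, fader_gain_db=0.0):
    """Sum a group of track signals into one submix bus.

    tracks: list of equal-length float arrays (one per track).
    fader_gain_db: one fader controlling the whole group's level.
    (Hypothetical helper for illustration only.)
    """
    bus = np.sum(tracks, axis=0)           # route all track outputs to one bus
    gain = 10.0 ** (fader_gain_db / 20.0)  # dB -> linear
    return bus * gain

# Three "drum" tracks: moving one fader scales them all together,
# preserving the level relationships already dialed in.
kick  = np.array([0.5, 0.0, 0.5, 0.0])
snare = np.array([0.0, 0.4, 0.0, 0.4])
hats  = np.array([0.1, 0.1, 0.1, 0.1])

drum_bus = make_submix([kick, snare, hats], fader_gain_db=-6.0)
```

Because the fader is applied after the summing point, the internal balance of kick, snare, and hats is untouched when the bus level changes.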
In the video above, learn how to use submix processing with Ozone to speed up your workflow and create a cohesive sound.
Our first method involves organizing your submixes by instrument type. You can create a submix for all the guitars, a group of vocals, or every element of your drums (kick, snare, hi-hat, etc.) and route them to the same bus channel so they can be processed by the same effect chain.
For example, once you have the level relationships set for every element of your drum set, it makes more sense to use one fader to adjust the drums’ overall level rather than adjusting the kick, snare, hi-hat, toms, and cymbals individually every time. Not only does that take more time, but you risk changing the level relationships you’ve already dialed in.
The same idea applies to a group of guitar tracks or background vocals. Instead of adding reverb to every track individually, for example, you can add reverb to the entire group which can help glue different tracks together and help reduce your CPU load.
This approach can simplify your mixing session and make it more manageable. Instead of having to blend every element in the mix:
Guitar Left, Guitar Right, Background Vocals Left, Background Vocals Right, Kick, Snare, Hi-hat, Cymbals, Toms, Piano, Synths, Organ, Bass, Sub, etc.
You can route them to their corresponding submixes and work with a smaller number of elements: Guitars, Background Vocals, Drums, Keys, and Bass.
This technique was pioneered by and named after Michael Brauer, the famous engineer who’s worked with Coldplay, Aerosmith, John Mayer, Bob Dylan, Aretha Franklin, and more.
Michael Brauer’s multibus method originated as a solution to the limitations of stereo bus compression, a common practice meant to glue different elements of a mix together and shape the tone of the overall sound. The problem, however, is that a compressor on the main stereo bus reacts to the wide range of audio signals it receives, which can lead to some frequencies being compressed more than others. Brauer realized this limitation while working on an Aretha Franklin mix. When he was asked to push the level of the bass, the vocals would suddenly drop as the stereo bus compressor reacted to the added bass in the mix. When he tried to compensate by raising the level of the vocals, the bass would drop in return. This called for a new approach to mixing that would allow for greater control of dynamics and loudness while minimizing problems on the stereo bus.
The Michael Brauer method involves sending subgroups of instruments, except vocals, to four submixes labeled A, B, C, and D, all processed with their own unique compression settings. He figured if he could split a mix into multiple stereo bus groups, he could have greater control over the end result because this mix bus structure would prevent any single instrument group from negatively impacting the dynamics of another in the main mix bus. This method allows for each bus to be processed with compressor settings that complement the input material while allowing for greater control over the dynamics, movement, and tone of the mix.
This is how the ABCD multibus technique is organized:
Bus A: all instruments that occupy the upper midrange (keys, synths, percussion, SFX, etc.)
Bus B: all foundational elements of a song (drums, bass, cello, low percussion, etc.)
Bus C: elements in the midrange that require a lot of automation (guitars, piano, etc.)
Bus D: slow attack instruments that would benefit from widening or soft tube compression (background vocals, ambient pads, strings, etc.)
In addition to these four mix buses, the vocals would be routed to five aux channels, each with a unique compressor. The compressed and uncompressed signals of the vocals would then be blended together to shape their tone over the course of the song.
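As a rough sketch, the ABCD routing above could be modeled as a simple lookup table. The instrument names, the fallback bus, and the `route` helper are hypothetical paraphrases of the description in this article, not Brauer's actual session layout:

```python
# Illustrative mapping of instrument groups to the A/B/C/D mix buses,
# paraphrased from the description above (assignments vary per song).
BUS_ASSIGNMENTS = {
    "keys": "A", "synths": "A", "percussion": "A", "sfx": "A",
    "drums": "B", "bass": "B", "cello": "B", "low_percussion": "B",
    "guitars": "C", "piano": "C",
    "background_vocals": "D", "ambient_pads": "D", "strings": "D",
}

def route(instrument):
    """Return the multibus an instrument feeds. Lead vocals bypass the
    ABCD buses and go to their own parallel aux chains instead."""
    if instrument == "lead_vocal":
        return "vocal_aux_chain"
    # Defaulting unknown instruments to bus C is an assumption for the sketch.
    return BUS_ASSIGNMENTS.get(instrument, "C")
```

The point of the table is that each bus can then carry its own compressor, tuned to the material it receives, rather than one compressor reacting to everything at once.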
The benefits of this method include a finer—but not too fine—point of control when balancing elements; you need not hunt around for submixes when there are only four of them. It’s a good balance between the chaotic ramble of no submixes and the confusing panoply of too many submixes.
In this technique, you might see an SSL-style compressor kissing the needle on the drum bus with a slow attack and a medium-to-fast release (or auto release, depending on the song); you might also see, on the music bus, some broad-strokes EQ, and in some cases, a little widening to move elements away from the vocal. Compression is usually employed throughout the four buses.
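To make the compressor behavior concrete, here is a minimal feed-forward compressor sketch in Python. It is a generic textbook design, not an SSL model, and the threshold, ratio, attack, and release values are illustrative only:

```python
import numpy as np

def compress(x, sr, threshold_db=-18.0, ratio=4.0,
             attack_ms=30.0, release_ms=150.0):
    """Minimal feed-forward bus compressor sketch (not an SSL emulation).

    A slow attack lets transients through before gain reduction kicks in;
    the release recovers between hits. All parameter values are illustrative.
    """
    atk = np.exp(-1.0 / (sr * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (sr * release_ms / 1000.0))
    env = 0.0
    out = np.empty_like(x)
    for i, s in enumerate(x):
        level = abs(s)
        # Envelope follower: rise at the attack rate, fall at the release rate.
        coeff = atk if level > env else rel
        env = coeff * env + (1.0 - coeff) * level
        level_db = 20.0 * np.log10(max(env, 1e-9))
        over = level_db - threshold_db
        gain_db = -over * (1.0 - 1.0 / ratio) if over > 0 else 0.0
        out[i] = s * 10.0 ** (gain_db / 20.0)
    return out
```

Signals below the threshold pass through unchanged; sustained loud material is pulled down by roughly `(1 - 1/ratio)` of its overshoot in dB.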
With this technique, however, you may run into problems when it comes to the bass. Some engineers experience trouble when low end information is split between different bus channels. Inevitably, the lows will fight some element in either bus, and you can find yourself wishing the bass had its own channel.
Another way to organize elements in your subgroups is to categorize them by the function they serve in your mix as opposed to frequency content or type of instrument, as with the previous two examples. The idea behind this approach is that some instruments take up multiple frequency bands (e.g., piano, guitar, drums) and even different instances of the same instrument can serve different functions.
Many low end elements, for instance, sport high-midrange for snap (think of the attack of an electric bass) and instruments like guitars can fall below the 100 Hz range, yet still be harmonic, rather than bass material. Vocals can be pads, melodies can be chordal…indeed, you begin to realize that frequency content has little to do with it: it’s all about function.
To create this submix structure, you need to identify the function of different elements of your mix. Maybe the bass feels anchoring; percussion feels propulsive; harmonic information feels supportive; melodies feel ear-catching; pads feel lilting; effects feel any manner of ways—disorienting, swimmy, or a whole host of adjectives. Whatever you land on, make sure the functions are distinct and don’t conflict with each other.
You can find your own adjectives, your own connections. The importance, of course, lies in making them. For the purposes of illustrating this concept, here’s an example of a submix configuration with function in mind. It consists of six building blocks of a mix: percussion, bass, harmonic information, pads, melody, and effects.
A percussion bus would include any instrument whose rhythmic function:
- Drives the feel of the music
- Supersedes its melody
A classic example would be the drums: while they might be tuned to the key of the song, their root note isn’t their primary purpose (not usually, anyway). Instead, drums frequently provide propulsion. A more counterintuitive example might be a scratchy, funky guitar part. In the mix, its tonal content might be lost behind a wall of sound, but that sixteenth note picking rushes the song along like a well-played shaker.
We could consider bass to be any instrument that hangs at around 100 Hz and below. But it’s a bit more complicated; indeed, a more apt spelling would be “base,” as this is the anchor of the song—and without it, you’re adrift.
Examples include the acoustic bass and the electric bass, but a jazz trio’s Hammond organ also centers the arrangement, as does the 808 kick drum in a hip-hop mix.
This category holds any non-stagnant generator of harmony. What do I mean? Think percussion in reverse: an instrument whose harmonies:
- Drive the chordal motion of the song
- Supersede any innate rhythmic content
Notice I wrote “supersede” and not “obliterate,” because there is a gray area. And in this gray area, rhythm does come into play. The composer Bach influenced Western harmony not through block chords, but through contrapuntal, rhythmic melodies—a violin line moving against a cello figure, for example. Taken together, these solo lines create overarching harmonic movement. Thus, a contrapuntal string quartet can serve the same function as a strummed acoustic guitar, depending on the arrangement.
Here we also have harmonic content, but its rhythm is not of much importance. Background vocals holding whole-note “oooh” come to mind, as does a synth swell that boasts no arpeggiation or sequencing. A Hammond organ can often hang like a pad.
Any sound or instrument whose performance you’re guaranteed to remember. The hook, the lead, whatever you want to call it—this element is the standout. It can be your singer, your rapper, your saxophone, your soli of strings, and more. It's the centerpiece, holding all mnemonic value; if it grabs the attention and rings in your brain, it qualifies.
These include your delays, your reverbs, and what have you, but also encompass stingers, risers, and sound effects. They can often be “exclamations”—things that stab the mix in the night.
Think of them as emotional indicators, subtle or otherwise: Elvis Costello singing solo over an electric guitar is often more aggressive than Jeff Buckley doing so, and this can be a function not only of timbre, but of the halls and rooms into which they’ve been engineered.
These days, with practically infinite busing in DAWs, limiting oneself to four buses feels more like a Spartan exercise than a necessity. Many engineers have evolved their own submix templates over the years, often changing their workflow depending on the project. For example, if you’re working in post-production, you can use a submix structure quite different from a musical approach. In post, you might have buses for dialogue, music, and sound effects.
Your submix structure for music can evolve over time as well. For example, you can run your submixes in the following manner:
- Special effects (risers, etc.)
- Other effects (reverb, delay, etc.)
This type of template can give you dedicated control over different elements of the mix, a simplified way to gain-stage, the choice to compress certain elements, and the ability to use others as simple global faders. Also, automating becomes much easier with these buses.
Or consider this permutation, which lays out the effects submixes alongside their corresponding instrument submixes so they can be ganged and muted easily:
In this case, all the effects buses would be routed to “All effects.” Yet they’d be grouped with their corresponding instrument buses so they can be muted and soloed together.
So if you want to mute all the vocals and vocal effects, you can do so quickly without muting the drum effects.
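The ganged mute behavior can be sketched as a simple mapping; the group and channel names here are hypothetical:

```python
# Sketch of ganged mute groups: each instrument bus is paired with its
# effects bus so they mute and solo together. Names are made up.
MUTE_GROUPS = {
    "vocals": ["vocal_bus", "vocal_fx"],
    "drums":  ["drum_bus", "drum_fx"],
    "guitars": ["guitar_bus", "guitar_fx"],
}

def channels_to_mute(group):
    """Return every channel that should be silenced when one group is muted."""
    return MUTE_GROUPS[group]
```

Muting "vocals" silences the vocal bus and its effects in one move, while the drum effects keep ringing.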
Each arrangement of submixes has its benefits and drawbacks, so you can decide which structure to use based on the number of elements in the song, how ambient it might be (and how many effects channels the mix would need), or simply by intuition. The sky's the limit here.
You can also use submixes for creative effect. For example, this is a template you can use for live mixes, to reconstruct the feeling of a live venue:
- Background effects
- Medium-ground effects
- Foreground effects
- Behind your head effects
This is a psychological enterprise more than anything else, but it works if you’re mixing with psychoacoustic principles in mind.
For instance, you can artificially recreate the proximity effect with the proper use of EQ and levels. By lowering the level of the “background effects” submix (i.e., elements that “linger” at the back of the stage) and rolling off the lows and highs, you can make them appear farther back in the mix. Meanwhile, you can make the “foreground effects” submix appear closer to the listener by retaining more of the low and high frequencies, just as a source positioned closer to the listener would.
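A rough sketch of that “push it back” move, assuming simple one-pole filters and illustrative settings (a real mix would use proper EQ bands):

```python
import numpy as np

def push_back(x, sr, level_db=-6.0, low_cut=200.0, high_cut=4000.0):
    """Rough 'farther back' distance cue: drop the level and roll off
    both the lows and the highs. One-pole filters; numbers illustrative."""
    a_lo = np.exp(-2.0 * np.pi * low_cut / sr)
    a_hi = np.exp(-2.0 * np.pi * high_cut / sr)
    gain = 10.0 ** (level_db / 20.0)
    lp_lo = lp_hi = 0.0
    y = np.empty_like(x)
    for i, s in enumerate(x):
        lp_lo = (1.0 - a_lo) * s + a_lo * lp_lo   # tracks the lows
        hp = s - lp_lo                            # high-pass: lows removed
        lp_hi = (1.0 - a_hi) * hp + a_hi * lp_hi  # then roll off the highs
        y[i] = gain * lp_hi
    return y
```

The band-limited, quieter result reads as more distant; keeping the full-range signal (skipping both filters) keeps it up front.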
The “behind your head” (sound waves bouncing off the walls of a venue) submix can be used as a widening trick (flip the left and right channel and put the right out of phase) with a little bit of reverb, all mixed subliminally low.
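That widening trick, minus the reverb, can be sketched as follows; the wet level here is an arbitrary stand-in for “subliminally low”:

```python
import numpy as np

def behind_head_effect(left, right, wet_db=-24.0):
    """Sketch of the 'behind your head' widener described above:
    swap the channels, invert the new right channel's polarity, and
    mix the result in quietly. Reverb is omitted for brevity."""
    wet = 10.0 ** (wet_db / 20.0)
    wide_l, wide_r = right, -left          # flip L/R, put right out of phase
    return left + wet * wide_l, right + wet * wide_r
```

Because the wet signal is both swapped and polarity-inverted, it decorrelates the channels slightly, which the ear reads as width behind the listening position.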
This submix configuration is focused on making it easier to create depth. So, for example, if you want the guitarist’s stadium solo to feel like it’s soaring over the listener’s head, you can automate the reverb or delay to swim between the medium-ground and foreground submixes, and throw some of it to the “behind your head” submix as well.
The method may or may not work for you. The point is to open your mind to the possibilities of what submixes can do for you, if you allow yourself to think of them as the solution to a puzzle.
Streamline your workflow with submixes
Do you have to use submixes? Of course not! Some mixes would absolutely fall apart under all this comp-pressure! But their benefits cannot be overlooked. In addition to processing, the functional aspects are quite powerful: automating whole groups of instruments at a time on a single fader can do wonders for creating a cohesive mix, improving your mixing workflow, and reducing your CPU load. How you use submixes is ultimately up to you. But we hope this article has given you some insights into how they work, when they work, and some creative ways to use them.