This article references a previous version of RX, and Music Rebalance has been improved in RX 8! Much of the information in this article still applies to Music Rebalance 2.0, but click here to read my newer article on Music Rebalance in RX 8.
When Music Rebalance in RX first came out, I remember thinking, “This’ll be very interesting.” The tool was first explained to me as a life-saver in post production. For example, I could lower the vocal within a background music track, thereby minimizing conflict with the predominant dialogue.
I don’t know about you, but my contrarian brain always yearns to thwart intended uses. So, I began to ruminate on how to push this processor as far as it could go: what if I didn’t just raise or lower a vocal part, but isolated an element completely, then processed it to extreme degrees? Sure it could add artifacts, but so what? What’s a little artifacting when it pushes the envelope of sound design?
I set about testing Music Rebalance along these lines, looking to see if I could fashion new feats of sound design, or solve heretofore impossible utilitarian problems. After experimenting, I kept my tricks to myself, until asked to expound on them for this article.
So, it is with pleasure that I present seven ways to utilize Music Rebalance as a music production and mixing tool. Let’s begin!
I first discovered this trick during one of my own recordings. I was testing out Antelope Audio's Edge Go microphone for review alongside Aston's Origin condenser mic in my home studio. In my tests, I recorded myself playing acoustic guitar and vocals simultaneously. I mic’d the vocals with the Antelope Edge Go and the guitar with the Origin.
Unfortunately, I sang a passage out of tune. This kind of thing is often a nightmare to fix, but Music Rebalance let me tune the problematic area with relative ease.
To give you an idea, this is the out-of-tune passage I was dealing with. Listen to the second occurrence of the word “say” on the following example:
Using Music Rebalance, I split the vocal track into two parts: vocal and everything else. By importing the same audio file into RX Audio Editor two separate times, I was able to perform different processing on each version of the file. You can see how I handled the endeavor in these two screenshots:
This left me with two distinct versions of the vocal track—one that only held music, and one that only held vocals. Pitch correction was then applied, but only to the vocal track.
The results sound like this:
However, I had to go one step further. The guitar mic also picked up vocals, so I had to repeat the process. I split the guitar track into its vocal and music components with Music Rebalance. Then, I had to pitch-correct the vocal with the exact same settings that I used previously. This was key, as it would save me from noticeable artifacts.
After that, I edited everything together in my DAW, and the results were far less pitchy. You can hear this on the second occurrence of the phrase “what the hell can I say.”
I found this to be a subtle way of handling the issue, and I bet you will too.
In productions, dealing with samples is par for the course. Here, Music Rebalance gives you a new way to process your samples, diving deeper into them than ever before. Perhaps you really want the drum break from a sample. With Music Rebalance, this is now possible.
Let’s take a quick look at the following audio sample. I'd like to grab the drum break:
I could make a crazy beat out of those drums—but I don’t want the music. However, I can isolate the drums, like so:
...and get this:
Tailoring the drums to your liking is a matter of muting everything else and exporting just the drums; from there, you can do nearly anything with the results, building entirely new beats. When I was first sampling my own library of music, I often wanted just the drum break of a tune, or just the guitar part. Now, many years later, I can finally get it, thanks to Music Rebalance.
Along the same lines as the previous tip, you can use Music Rebalance to break out a sample loop into stems. Let’s say we have a loop, which I’ll take from the same song I broke out for the previous example:
For the vocal stem, I’d mute everything but the vocals.
I’d repeat the process with the same settings for each of the elements. It’s a bit labor-intensive, but it can be worth it. If I break this loop into four dedicated stems, their sum total sounds like this:
This doesn’t sound noticeably different from the original. However, I can now manipulate the stems in any way I want, for purposes that are not only creative but utilitarian. Say, for instance, I don’t want to create a drastic remix. Instead, I could do something quite simple, like lower the bass a smidge:
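Conceptually, recombining separated stems with per-stem level changes is just a weighted sum of the stem buffers. Here is a minimal Python sketch of that idea; the stem names, the three-sample buffers, and the -2 dB bass trim are all illustrative, and real audio work would use numpy arrays loaded from the exported stem files:

```python
# Minimal sketch: recombine separated stems with per-stem gain.
# Plain Python lists stand in for audio sample buffers.

def db_to_linear(db):
    """Convert a gain in decibels to a linear amplitude factor."""
    return 10 ** (db / 20)

def mix_stems(stems, gains_db):
    """Sum stems sample-by-sample, each scaled by its gain in dB.

    Stems missing from gains_db are mixed at unity (0 dB).
    """
    length = len(next(iter(stems.values())))
    mix = [0.0] * length
    for name, samples in stems.items():
        gain = db_to_linear(gains_db.get(name, 0.0))
        for i, s in enumerate(samples):
            mix[i] += gain * s
    return mix

# Four hypothetical stems, each three samples long
stems = {
    "vocals": [0.1, 0.2, 0.1],
    "bass":   [0.3, 0.3, 0.3],
    "drums":  [0.5, -0.4, 0.2],
    "other":  [0.05, 0.05, 0.05],
}

# Unity gain everywhere except the bass, trimmed by 2 dB
rebalanced = mix_stems(stems, {"bass": -2.0})
```

With all gains at 0 dB this reduces to a straight sum, which is why the recombined stems sound so close to the original loop.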
I could also re-pan or equalize specific mix elements, or shape the drums differently on the left and the right with the Transient Shaper in Neutron 3. The result might be something like this:
It is important to note that these stems won’t be perfectly isolated. Depending on how you’ve set the separation algorithms, you’ll either have a fair amount of leakage or a fair amount of artifacts. That’s why we’d never recommend breaking out a full mix into stems, whether for remixing or mastering purposes. This isn’t how Music Rebalance was designed to be used.
For diving into a loop or a sample, however, the technique is a viable way of obtaining the building blocks to fashion something new and creative.
When you isolate stems, you can go further than standard, utilitarian processes. In fact, you can accomplish crazy feats of sound design. Say we isolate the vocal alone from the above example:
We could very well process this into something insane. Let’s start with a Gate to cut out extraneous noise.
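At its simplest, a gate mutes the signal whenever its level falls below a threshold. This toy Python sketch shows the idea; the threshold and sample values are illustrative, and a real gate adds attack/release smoothing so the muting doesn't click:

```python
# Toy noise gate: zero any sample whose absolute level falls below
# the threshold. Real gates smooth the transition with attack and
# release times to avoid clicks; this sketch omits that for clarity.

def simple_gate(samples, threshold=0.05):
    return [s if abs(s) >= threshold else 0.0 for s in samples]

# Hypothetical buffer: low-level separation residue between louder peaks
noisy = [0.01, 0.3, -0.02, -0.4, 0.004, 0.2]
gated = simple_gate(noisy)  # quiet residue is silenced, peaks pass through
```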
Next I’ll use VocalSynth 2 to really mangle the heck out of it.
Now we top it all off with some reverb from Stratus by Exponential Audio:
The results are strange indeed:
Should we continue in that direction, we can make creative tapestries out of an initial loop, edging our results into the mix for flavors unachievable through other means.
Say your tune is built around a sample. The sample, however, doesn’t provide rich enough bass material for you to play with. You separate the bass content from the rest of the mix using Music Rebalance, and yes, you now have the notes—but you still don’t have a tone worth using. This happens all too often, and it can certainly be frustrating.
Luckily, there’s hope. Using this isolated bass information, you can extract a MIDI bassline from the sample. Different DAWs do this in different ways; it doesn’t matter whether you use Flex Pitch in Logic Pro X or Ableton Live’s audio-to-MIDI conversion to do the job. All that matters is the end result: a MIDI bassline that can now reinforce the sample, much like drum augmentation might reinforce a badly recorded snare. Find the right synth to complement the original bass, and huzzah—you have something to work with!
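Under the hood, audio-to-MIDI conversion maps each detected fundamental frequency to the nearest MIDI note number using the standard equal-temperament formula. A small sketch (the detected frequencies here are hypothetical, standing in for what a pitch detector would report from the isolated bass):

```python
import math

def freq_to_midi(freq_hz):
    """Map a fundamental frequency to the nearest MIDI note number.

    Uses the standard mapping: MIDI note 69 = A4 = 440 Hz,
    with 12 notes per octave.
    """
    return round(69 + 12 * math.log2(freq_hz / 440.0))

# Hypothetical fundamentals detected from an isolated bassline
detected = [55.0, 61.74, 73.42]                 # A1, B1, D2
bassline = [freq_to_midi(f) for f in detected]  # -> [33, 35, 38]
```

Those note numbers are what end up on the new MIDI track, ready to drive whichever synth complements the original bass tone.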
I find this tip particularly exciting. One of the first pieces I wrote for iZotope centered around mixing live music. I spoke of how you have to work within the constraints of what you’re given. Now, with Music Rebalance, those restraints have gotten a bit looser.
Music Rebalance allows you to dive deeper into live multi-track recordings than you ever could previously. Consider the drums in a multi-track recording: you usually get a kick, snare, floor tom, and bleed from the other mics. If you’re lucky, and the venue is a little bit larger, you get rack toms. If you’re very lucky, you get cymbal mics and room mics—but you’re not always that lucky.
Typically, you have to create a sense of drum ambiance from the other mics in the room. Of course, the bleed has other instrumental information, so you must be sensitive to that in the mix. But now, the bleed can be marshaled and tailored by the Music Rebalance module.
A while back I was given a live recording with kick, snare, and low tom provided for the drums. However, the drummer also sang. His mic came in from his left side. The bass player, who stood to the drummer’s right, also sang. I essentially had built-in stereo ambiance!
When the drummer and the bass player didn’t sing, I had marvelous bleed; it almost felt like overheads. But when they sang, the bleed was obscured; as a result, the picture of the drums changed and the vocals were too loud.
I realized, however, that I could use Music Rebalance to ameliorate the situation. Observe the sound of the drummer’s mic when he wasn’t singing:
This is what it sounded like when he sang:
So I cut out the vocal, like so:
And that resulted in this track:
This was a track I could dial in, retaining something of my original drum picture while he continued to sing in his original track.
Clients often want a vocal-up or vocal-down mix for different destinations. Licensing might require a TV track, which is frequently instrumental, or instrumental with the chorus left intact. Film work makes use of a vocal-down mix so that the dialogue can be heard better over the music.
Clients can also need things incredibly quickly—much faster than it would take to recall a hybrid, analog/digital mix, or a straight-up console mix. Sometimes they only need a mockup of the vocal-up/vocal-down mix to show one specific music supervisor or network connection; they have an immediate opportunity to pitch, and they want to take advantage of it while the iron is hot.
With Music Rebalance, bumping the vocal up or down in volume is as easy as moving the sliders around, and the results are transparent enough to be deemed acceptable by mastering engineers such as Ian Shepherd.
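In effect, those sliders sum the separated vocal back against everything else at a new level. A hedged sketch of the arithmetic (the buffer contents and the ±1.5 dB offsets are arbitrary illustrative choices, not anything the module prescribes):

```python
# Sketch: derive vocal-up and vocal-down mixes from a separated
# vocal plus the remaining instrumental. Lists stand in for audio
# buffers of equal length.

def vocal_mix(vocal, instrumental, vocal_db=0.0):
    """Sum vocal and instrumental, offsetting the vocal by vocal_db decibels."""
    gain = 10 ** (vocal_db / 20)
    return [gain * v + i for v, i in zip(vocal, instrumental)]

vocal = [0.2, 0.4, 0.1]
instrumental = [0.3, 0.1, 0.2]

vocal_up = vocal_mix(vocal, instrumental, +1.5)
vocal_down = vocal_mix(vocal, instrumental, -1.5)
```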
Given time, it shouldn’t be your first choice—if circumstances allow you to go into your mix and move things around, that’s what you do. But if you don’t have that kind of time, it’s an acceptable alternative.
This powerful algorithm demands responsibility. We can get away with isolating vocal stems only if we’re creating weird tapestries out of them—for a mashup or a remix, we’d have to be so forgiving of the bleed that we’d have to label any result inherently experimental. We’d basically be saying, “take this glitchy track for what it is!”
Likewise, you’ll note no real mastering tips provided here, for it would be irresponsible to give them. Finalizing a paying client’s record is no place to fiddle with tech that sounds completely alien when pushed a smidge too far.
With that in mind, be sure to pay attention to the boundaries of this process. Not only is it responsible to do so, it’s also instructive. Within the boundaries, the possibility for utilitarian tweaking is greatly enhanced. Outside the boundaries, you can achieve sonically weird tapestries for your productions, or fashion creative solutions for problems.
On this latter point, you must always keep an ear out for sonic incongruity. An in-tune live recording with obvious artifacts is of no use; I’d rather hear something out of tune.
In other words, it’s all about maintaining balance, and keeping that balance musical. Hey, it’s right there in the name!