4 Mastering Techniques that Shouldn’t Have Worked But Did

by Nick Messitte, iZotope contributor

August 30, 2017

Ozone 8 Advanced Equalizer

Before we begin, a word of caution: the name of this article is not "How to Master a Hit Record," "The Perfect Mastering Chain for Every Tune," or any other such title. Mastering, in my experience, is not a cookie-cutter enterprise. Some songs need only a coat of polish. Some might need light sculpting, a “tummy tuck” if you will. Other tunes, however, require laparoscopic surgery—the sonic equivalent of excising a malignancy, with constant vigilance paid to leaving surrounding tissue unharmed.

Think of the following tips as experimental treatments: it's a miracle they worked at all. Most of the methods I’m about to share are not advised for everyday affairs. But sometimes, when faced with unconventional problems, unconventional solutions turn out to be the best choices; certainly that was the case in what you’re about to read.

Indeed, if there’s a second theme to this article, it is to encourage creativity in the face of challenges, adventure in the face of daunting tasks—but never degradation in the face of fidelity. That is where we draw the line.

It can be argued that there is danger in some of these (in-the-box) mastering techniques. If pushed too far, they might tip the balance past compromise into the land of degradation (hence the “shouldn’t have worked but did” part of the title). But you’ll find danger in the conventional tools as well. I’m thinking of over-compression and the like. If a tune requires a conventional technique, we don’t shy away from it because of the potential to abuse the process. The same logic applies here: drastic situations call for drastic measures, but the key is employing them tastefully, and doing whatever you can to keep your perspective.

Okay, enough preamble. Let’s get to it!

1. Equalize the Side Channel of an M/S Signal in Stereo

I was recently handed a mix where the guitar and vocal were masking each other, even though the guitar sat hard-right, while the vocal sat in the middle (i.e., was distributed equally in both channels). I ended up automating an EQ cut in the side channel for the offending section, and the client was happy, but the tune kept gnawing at me, for I hadn’t come up with a perfect solution.

That problematic guitar? It only sat hard right. Yet here I was, hamstrung! I couldn’t use a stereo EQ in conventional left/right mode, for then I would directly impact the vocal. But the fix I chose—cutting a decibel at 670 Hz with an equalizer in mid/side—left me unsatisfied, as the cut bled into the left channel too, where that poor, hapless guitar wasn’t even sitting.

An answer popped into my noggin as I was trying to sleep (yes, these things do keep me up at night). But contemplating all the possible negative effects, I felt too timid to try it. The fix seemed needlessly ornate, so much so that it couldn’t actually be worth it. But I gave it a shot the next day, for I had nothing better to do, and I was genuinely curious.

I tweaked the right portion of a stereo EQ on the side (or “difference”) channel of a signal already split into M/S. To my surprise, it worked!

You can set this routing up with any stereo EQ switchable between left/right and mid/side, but to demonstrate the setup for this article, I'll be using the EQ in Ozone.

First, you duplicate your source track. Label one track “MIDDLE,” and the other “SIDES.” Make sure you buss them to the same auxiliary track and implement any further downstream processing, ITB and OTB alike, using this aux.

Both tracks will get the equalizer, but on the track labeled "MIDDLE," use the EQ to solo the middle (or mute the sides). You can see how that's done here:

M_S_S Image 1

Next, do the opposite on the “SIDES” duplicate—solo the sides:

M_S_S Image 2

At this point, these tracks should pass the null test with an unprocessed, polarity-inverted copy. If you don't hear total silence when implementing the null test, then you have a problem. A delay is being introduced, either by the plug-in or the DAW, and the chain is compromised; you’ll have troubleshooting ahead of you.

But assuming you pass the null test, you're now set for your final move: instantiate a third EQ, this one on the track marked “SIDES,” but this time, run it in left/right mode.

M_S_S Image 3

Once I had set up a routing scheme like this (mine was slightly different; I'll explain later), I could introduce a cut on the right side of the side channel at 670 Hz.
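The arithmetic behind this routing is easy to verify for yourself. Below is a minimal sketch in Python with NumPy, using synthetic signals in place of a real mix; a broadband 1 dB gain cut stands in for the 670 Hz EQ dip, and the mid/side split is done by hand rather than inside an EQ plug-in. It confirms both halves of the setup: the two tracks null back to the original stereo file, and a cut on the right channel of the sides track leaves the left channel completely untouched.

```python
import numpy as np

# Toy stand-ins for the mix described above: a "vocal" centered
# (identical in both channels) and a "guitar" sitting hard right.
rng = np.random.default_rng(0)
n = 48000
vocal = rng.standard_normal(n)
guitar = rng.standard_normal(n)
left = vocal.copy()        # vocal only
right = vocal + guitar     # vocal plus the hard-right guitar

# "MIDDLE" track: the mid (sum) signal on both channels
mid = 0.5 * (left + right)
mid_track = np.stack([mid, mid])

# "SIDES" track: the side (difference) signal, opposite polarity per channel
side = 0.5 * (left - right)
sides_track = np.stack([side, -side])

# Null test: the two tracks summed must reconstruct the original exactly
recon = mid_track + sides_track
assert np.allclose(recon[0], left) and np.allclose(recon[1], right)

# The trick: a left/right processor on the SIDES track that touches only
# its right channel (a broadband 1 dB cut stands in for the 670 Hz dip)
cut = 10 ** (-1 / 20)
sides_track_eq = np.stack([side, -side * cut])

out = mid_track + sides_track_eq
assert np.allclose(out[0], left)        # left channel: completely untouched
assert not np.allclose(out[1], right)   # right channel: takes the cut
```

In a DAW, the EQ's mid/side solo buttons do this splitting for you; the point of the sketch is simply that mid plus polarity-flipped side reconstructs left/right exactly, which is why the final left/right EQ can reach a single channel of the side signal.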

Almost instantly, I was taken aback by how simply this solved my very specific problem. The ease with which I was able to grab hold of the offending instrument and simply move it out of the way of the vocal was downright eerie. I waited a day, listened again, showed the results to some people I trust, and remained enthused. Indeed, I was so happy that I remastered the tune (with this slight tweak feeding almost the same chain) and sent the file to the client. Good thing they hadn't released the tune yet, because they liked this version of the master better than the previous one.

Again, this is not a solution I would implement on a day-to-day basis. The problems I contemplated were real: with bad implementation, I could have done serious harm to the stereo image, introduced unwanted smearing, or, if the EQ was linear-phase, caused audible pre-ringing. But with judicious use (I only cut a decibel, and I automated that cut out when the section of the song ended), it worked in this specific instance. It might for you too.

2. Try Parallel Processing for More than Compression

Before I knew what mastering was, I would employ parallel processing on the stereo tracks of my friends and colleagues (mostly while polishing up their demos). However, this wasn’t your run-of-the-mill parallel compression; this was frequency emphasis, gating, and sometimes even upwards expansion employed in a most esoteric manner, though I didn’t know it at the time.

Now that I get paid to master every once in a while, I still engage in parallel processing—though I like to think I've sharpened my ears enough to know when I might cause harm. Because make no mistake: this additive process, if deployed incorrectly, can quickly mess up your master.

If I'm working on a track with an element that needs emphasis (kick, snare, what have you), I may not always go for an EQ; it might impact the mix too much. Instead, I might opt to duplicate the entire track, and then do whatever it takes to exaggerate the element I need enhanced. Finally, I’ll edge the duplicate track against the original ever so slightly.

For example, take the old kick and bass war. If you’ve got a kick and bass fighting each other, conventional wisdom has it that you work right up to—but never past—the point where the gains aren't worth the downsides. This means that you're often left with either a strong bass or a strong kick drum, but a compromise when it comes to interplay between the two. Say, for the sake of this example, the bass is more prevalent, and the kick sounds wimpy.

Kick vs. Bass | Design by Ben Walker

Well, I could be satisfied. I could let it alone. But I'm not. And I don’t.

Instead, I duplicate the original track, apply judicious EQ, gating, and possibly upward expansion until all I hear are the necessary kick drum frequencies.

This may even involve making another duplicate, manipulating it, and sending the result to a buss (with no output) for sidechain purposes. Then I can use that soundless duplicate as a key to gate, duck, or compress the first duplicate.

Yes, it’s complicated. It’s messy. But I’ll do anything it takes to get a second, exaggerated kick going, provided that once it’s folded back in, it’ll naturally emphasize the element lacking in the original mix. It must improve upon the mix, or else it’s garbage.
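The duplicate-exaggerate-blend idea can be sketched in a few lines. Everything below is an assumption for demonstration purposes: a decaying 60 Hz burst stands in for the wimpy kick, a steady sine for the bass, and a crude envelope-follower gate stands in for the EQ/gating/expansion chain you'd actually use.

```python
import numpy as np

# Synthetic "mix": a decaying 60 Hz thump (the wimpy kick) at the top of
# the file, over a steady, quieter sine standing in for the bass.
sr = 8000
t = np.arange(sr) / sr
bass = 0.1 * np.sin(2 * np.pi * 110 * t)
kick = 0.5 * np.sin(2 * np.pi * 60 * t) * np.exp(-20 * t) * (t < 0.3)
mix = bass + kick

# 1) Duplicate the track
dup = mix.copy()

# 2) Exaggerate only the element that needs help: a crude gate that
# opens while the smoothed envelope is hot (i.e., during the kick)
envelope = np.convolve(np.abs(dup), np.ones(64) / 64, mode="same")
gate = (envelope > 0.15).astype(float)
exaggerated = dup * gate

# 3) Edge the duplicate against the original ever so slightly (about -12 dB)
blend = 10 ** (-12 / 20)
out = mix + blend * exaggerated

# The kick region gets reinforced; the bass-only tail is bit-identical
assert not np.allclose(out[:500], mix[:500])
assert np.allclose(out[-1000:], mix[-1000:])
```

The same topology extends to the sidechain variant described above: a third, muted duplicate would feed the gate's detector input, rather than the gate keying off its own signal.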

Here we get back to the title of the article, and all the reasons this technique shouldn’t work—or at the very least, the reasons it shouldn’t be implemented on a daily basis. If done sloppily, this process can absolutely destroy a master. Even within a DAW that handles delay compensation in a smooth and timely (pun intended) manner, you are still subject to smearing; that’s the nature of physics when recombining two subtly different signals. And, as in the last tip, a linear-phase EQ used here can definitely introduce pre-ringing artifacts. These can be quite jarring, especially in the low range (they can sound like a reversed, ugly, and resonant “thwump” anticipating your transient).

As always, you must use your ears to judge if the compromise is worth it. If it is, that’s all that matters.

3. Use Region-Specific Processing with RX

Yes, the website you’re reading happens to be iZotope.com, but even if it weren’t, I’d be recommending this tip: RX 6's spectral editor is not just a powerful post-production tool, but a scalpel that, in the hands of a skilled mastering engineer, can scrape tumors right out of the body of the mix.

Here's an example that recently came my way: A kick in a mix was flubby, with too much activity in the 300 to 400 Hz area. It didn't mask the guitar's meaningful information, or the vocal’s presence, but it somehow rubbed nastily against the bass and the keys. Unfortunately, a static EQ drained all the life out of the track. A dynamic EQ, no matter how I pushed it, gave me pumping that I didn't want, due to the timing of the rhythmic elements and the placement of the frequencies.

From my experiences in post-production, I have more than a passing familiarity with RX 6, so I fired it up, loaded in the track, and took a look. Sure enough, each kick's problematic frequency bump was laid out in explicit orange. It became clear exactly when the offending transient sounded, when it ended, and most importantly, what level it should be in relation to the surrounding color scheme.

It was a painstaking process, but I went in and manually gained the specific regions down until they no longer offended. The result was clean, natural, and low on compromise.
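RX does this with a far more sophisticated spectral editor, but the underlying move, attenuating a chosen time-and-frequency region while leaving everything else bit-identical, can be sketched in Python/NumPy. The signals and the 12 dB figure below are invented for the demonstration; a real pass would also crossfade the region edges to avoid clicks.

```python
import numpy as np

# Synthetic track: a 350 Hz "flub" rings only during the middle second,
# over a steady 1 kHz element that must not be touched.
sr = 8000
n = 3 * sr
t = np.arange(n) / sr
x = 0.2 * np.sin(2 * np.pi * 1000 * t)
region = slice(sr, 2 * sr)                 # the offending second
x[region] += 0.4 * np.sin(2 * np.pi * 350 * t[region])

# Region-specific spectral gain: FFT only the offending slice,
# pull its 300-400 Hz bins down 12 dB, and write the slice back.
seg = x[region].copy()
spec = np.fft.rfft(seg)
freqs = np.fft.rfftfreq(len(seg), d=1 / sr)
band = (freqs >= 300) & (freqs <= 400)
spec[band] *= 10 ** (-12 / 20)

y = x.copy()
y[region] = np.fft.irfft(spec, n=len(seg))

# Outside the region, nothing changes at all
assert np.allclose(y[:sr], x[:sr])
assert np.allclose(y[2 * sr:], x[2 * sr:])
```

The appeal over a static or dynamic EQ is visible in the assertions: the correction exists only where the problem exists, so the rest of the master never pays for it.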

Why this shouldn’t work is more of a workflow issue than a quality hindrance. While the risk of doing damage to the overall sound is not especially amplified—especially if you’re skilled in the ways of RX—you are taking yourself out of the musical aspect of mastering. Using an intensely granular process, you’re zooming in on troublesome areas by sight, and with that comes perspective problems that could wreck your headspace. You can miss the forest for the trees, as they say. And the time drain could sap your energy for the rest of the day’s work. Once you get a groove going in mastering an album, it often doesn’t pay to disrupt that flow. But in cases where there’s no other viable solution, it’s good to have a handy tool like this up your sleeve.

4. Use the Meter Instead of Your Ear

Here’s a tip you hear a lot: “let your ears be the guide,” or “mix by ear—not by meter.” I would never advise against these maxims. However, a situation arose where using the meter to supplement my ear—as a reality check—greatly improved the quality of my masters.

See, I have a problem with specific frequencies. It’s not surprising, as we all have our predilections. My particular ear is aggravated to no end by the area between 2 and 4 kHz. This can sometimes cause problems in my mixes, as this frequency band is essential for translation: the bulk of communicable information lies between 200 Hz and 5 kHz.

Thus, my personal preferences can trick me into leaving holes in the master that shouldn't be there. I’m willing to bet the same is true for you. In this case, your frequency analyzer can act as a protective measure.

One day, as I was equalizing during a master, I thought about the general consensus of what the frequency response should look like on a textbook pop track. It’s something like this:

All About That Bass Spectrum Analyzed

Source: The Sound Blog

Immediately I opened my frequency analyzer and noted a consistent hole in my master, predictably in the aforementioned 2 to 4 kHz region. I decided to fix the issue with a boost, ignoring the readout on my EQ itself and just watching the frequency analyzer until I was pleased with what I saw—not heard.
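A crude version of that analyzer check can even be automated. The sketch below builds a pink-ish noise "master" with a deliberate 9 dB hole between 2 and 4 kHz, measures the long-term average level in octave bands, and flags any band sitting well below the line through its neighbors. The band edges and the 4 dB tolerance are arbitrary choices for the example, not a standard.

```python
import numpy as np

# Build a pink-ish noise "master" with a deliberate 9 dB hole at 2-4 kHz
rng = np.random.default_rng(1)
sr = 44100
x = rng.standard_normal(10 * sr)
spec = np.fft.rfft(x)
freqs = np.fft.rfftfreq(len(x), d=1 / sr)
spec *= 1 / np.maximum(freqs, 20) ** 0.5      # pink-ish spectral tilt
spec[(freqs >= 2000) & (freqs <= 4000)] *= 10 ** (-9 / 20)
x = np.fft.irfft(spec)

# Long-term average level per octave band: a crude analyzer readout
edges = [125, 250, 500, 1000, 2000, 4000, 8000]
power = np.abs(np.fft.rfft(x)) ** 2
levels = []
for lo, hi in zip(edges[:-1], edges[1:]):
    band = (freqs >= lo) & (freqs < hi)
    # mean power per bin, in dB, so band width doesn't skew the comparison
    levels.append(10 * np.log10(np.mean(power[band])))

# Flag any band that dips well below the line through its neighbors
holes = [edges[i] for i in range(1, len(levels) - 1)
         if levels[i] < 0.5 * (levels[i - 1] + levels[i + 1]) - 4]
print(holes)  # the band starting at 2 kHz should be flagged
```

This is a reality check, not a target curve: a genuinely sparse arrangement can have "holes" that belong there, which is exactly why the by-ear pass described next still comes last.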

Then, I closed my eyes and listened as I a) switched between monitor sources, b) folded the mix to mono and shut off one speaker, and c) walked around the room (obviously I opened my eyes at this point). I was entranced by what I heard—not because it sounded better, but because it sounded more similar from each vantage point.

So I gave my ears a break, bounced an A/B comparison, and took the files to my laptop speakers and a car stereo. My findings were interesting: Even though I didn't like those frequencies in my normal environment, the overall master improved in terms of translatability.

Perhaps this is confirmation bias at work, but it has resulted in happier clients (i.e. fewer notes). Now I always flip on a spectrum analyzer at some point in the master and make sure there aren't holes. If there are, I can correct for them.

Of course, this isn’t the final process. I still tweak by ear. That’s why this technique, which arguably shouldn’t work, ends with me securing better results: it’s not a replacement for using my ears. It’s a system of checks and balances.

Conclusion

The ultimate point of this article, other than to show you some tips and tricks, is to encourage you to be enterprising and creative in the face of compromise. When you’re mastering a track, you can choose to accept the compromise (and often you must), or you can choose to push through, work as hard as you can, and achieve a result which is...well, one to five percent better than nothing. Still, this could be the one-to-five percent that really matters! It could be the difference between “great” and “excellent.”

If nothing else, remember that you can let go of the fear that what you’re trying looks stupid. You have my permission. And what’s more, you have my solidarity. Let me explain:

Remember that mid/side technique I told you about? Remember how I said that my routing was “slightly different”? I’m not embarrassed to tell you how it was different, even though I probably should be: what I had envisioned originally was not an M/S technique, but a whole cockamamie scheme I thought of as an "L/C/R matrix." It was very confusing, very elaborate, and it looked like this:

LCR Image

It was only in subsequent conversations with a very respected engineer (cited as an authority in Bob Katz's seminal book on mastering, no less) that I was asked to reexamine the premise of this matrix. In doing so, I realized that what I had reverse-engineered with five tracks and three busses could've been handled with two tracks, some conventional plugins, and one buss. To put it another way: my so-called “L/C/R matrix,” with its frequency attenuation, nulled against the simpler routing I presented earlier. It was my inelegant thought process that led me to this needlessly complicated solution—but here's the thing: my inelegance, at the time, worked. Later on, being called out on it by my betters helped me to learn its underlying principles. That’s why I’m not embarrassed, either to try it or to share my trials with you.

So don't be ashamed to try out a new technique. Just make sure to show the results to your peers, and let people you trust judge the resulting master. It all comes back to that old chestnut: “if it sounds good, it is good.” While mastering, however, you need to make doubly, trebly, even quadruply sure it sounds good. Don’t tip the balance—especially with unconventional techniques.