For the audio engineer, equalization is one of the most important tools—but I’m not telling you anything you don’t know. We carve space for our favored frequencies in much the same way a sculptor chisels a statue out of stone.
Like all our most important processes, however, equalization has the potential to be abused, and rather quickly at that. What follows is a list, in no particular order, of twelve common EQ mistakes. As with a previous and similar article on compression mistakes, we'll offer remedies for every malady.
And just like the other article, we'll provide the following disclaimer: if you find you've committed any of these sins, don't be hard on yourself—so have I! So have we all.
We often make this mistake in the beginning of our careers; I know I did. I'd listen to my favorite records, note their vibrancy and brilliance, and think I’d have to push the top-end on every track to get in the same ballpark.
Before I knew it, I’d invariably mixed a tin can. Like a poor craftsman, I tended to blame my tools: if only I had expensive plug-ins instead of these measly stock processes! Imagine the sweetness I could impart to every track!
But piles and piles of Pultec emulations didn’t sweeten the pot; they only brought bitterness, harshness, an ear-shredding mess. Death by a million treble boosts produced another problem too: all those shelves must begin somewhere in the frequency spectrum, often lower than I’d intended. 6 kHz, 7 kHz, 8 kHz: my mixes suffered from an abundance of problems in these ranges.
It took a lot of wasted money and not-so-wasted time to realize that shelving everything got in the way of the very brightness I sought. It turns out, though, that if I can resist the desire, trust in the finished product, and see the statue hidden in the marble, then I can uncover the true secret to satisfactory brightness: you only need one or two elements to brighten a whole track. A vocal at 16 kHz here, some overheads at 10 kHz there, and maybe (just maybe) a slight Baxandall boost on an important buss—but leave the rest alone. You'll be surprised at what happens if you trust in the innate brightness of the material.
Here's another tip: if your mix sounds too dull in comparison to your favorite mastered track, drop the level of the reference so it matches on the Loudness or RMS meter. You might be surprised what the loudness is adding in terms of high-end presence.
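In code, that level-matching step is nothing more than a gain ratio. Here's a minimal sketch, assuming both tracks are already loaded as float sample arrays; a proper session would match on a LUFS meter (ITU-R BS.1770), but plain RMS is close enough to make the point:

```python
import numpy as np

def rms(x):
    """Root-mean-square level of a float sample array."""
    return float(np.sqrt(np.mean(np.square(x))))

def match_rms(reference, mix):
    """Scale the reference track so its RMS level matches the mix's."""
    return reference * (rms(mix) / rms(reference))

# Toy demo: a "mastered" reference roughly 12 dB hotter than our mix
t = np.linspace(0.0, 1.0, 48_000, endpoint=False)
mix = 0.25 * np.sin(2 * np.pi * 440 * t)
reference = 1.0 * np.sin(2 * np.pi * 440 * t)

matched = match_rms(reference, mix)  # now sits at the mix's level
```

With the loudness advantage removed, you can judge the top-end of the two tracks on even footing.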
Similar to slapping on a compressor “just ’cuz,” needless high-passing is a life drain. Yes, sometimes nasty bits of thump and hum swim in the depths below 100 Hz, and for these, a cut can help. But resonances of a most pleasing, chest-pumping variety also lurk down there. You don't want to lose life-affirming low end just because someone in a tutorial said to cut everything below 100 Hz, do you?
Indeed, strange pieces of advice often crop up around high-passing, such as "find the instrument's lowest note and high pass there." The thinking, I believe, is that within the context of the mix, there's nothing of value below that frequency.
Okay, but here’s a hypothetical: Say your client paid top dollar to record her trumpet in the finest recording studio in the world. Sure, we're taught to mitigate needless room sound, but is this particular room sound—the finest in the world, mind you—needless? Could vital information denoting this sweet, sweet location lurk below the instrument's lowest note?
Let’s take the hypothetical even further: what if this trumpet player only blasted high C's for the whole song? Should we cut everything below that frequency? I’d say no. You’d lose too much of the space.
As always, context is everything when taking tips into account, and if there’s anything I want to impress upon you, it’s that high-passing is all about context; you don’t just do it willy-nilly. So if you need to high-pass, here are two tips:
First, protect any vital resonances. You can do this by adding a parametric boost just below the low-cut, so that the boost’s upward slope counteracts the filter and leaves a small bump around the cutoff. Here’s a picture:
Secondly, make sure your monitoring situation is accurate when dealing with low-end. Know the frequency range of your monitors and reference cans, audition any low-pass filters with and without your sub (if you have a sub), and make sure you know your room (this will come into play later).
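The first tip, the protective boost just below the cut, can be sketched with scipy. Every number here is illustrative, not a recipe: a hypothetical 100 Hz high-pass with a +4 dB peaking boost parked just beneath it, using the familiar RBJ cookbook peaking filter:

```python
import numpy as np
from scipy import signal

fs = 48_000

# Hypothetical 4th-order Butterworth high-pass at 100 Hz
hp_sos = signal.butter(4, 100, btype="highpass", fs=fs, output="sos")

def peaking_biquad(f0, gain_db, q, fs):
    """RBJ audio-EQ-cookbook peaking filter as one SOS section."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return np.hstack([b / a[0], a / a[0]])[np.newaxis, :]

# +4 dB bump at 90 Hz to protect the chest-pumping resonance
protect = peaking_biquad(f0=90, gain_db=4.0, q=1.5, fs=fs)
combined = np.vstack([hp_sos, protect])

# Compare the plain high-pass against the protected version
w, h_plain = signal.sosfreqz(hp_sos, worN=8192, fs=fs)
_, h_combo = signal.sosfreqz(combined, worN=8192, fs=fs)

idx = np.argmin(np.abs(w - 90))  # response right at the protected bump
```

Plot both magnitude responses and you'll see the combined curve keep a gentle shoulder around 90 Hz where the bare filter would have already rolled off.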
This is probably the biggest mistake I could mention here. Others could mention it too—heck, you're smart; you've probably come across similar articles and seen this mentioned; maybe you're even expecting me to go into it, and are wondering why it’s appeared eight hundred words in.
Indeed, the problems of EQ'ing in solo have long been storied: You start by correcting a track with no reference of how it fits into the bigger picture, surely incurring unnecessary problems down the line. Quite often you over-process the track or overextend certain frequencies, and when the solo’d material is placed back in context, the resulting track fights the vocal, the snare, or some other important element.
Now, here’s the question: If we all know that this is bad practice, why do we fall into this rabbit hole? Why does every article and tutorial surrounding EQ mistakes inevitably bring it up?
Because soloing falls within our basest instincts as engineers: when there’s a problem in our studio, we troubleshoot it by testing one item at a time, often starting with a cable. Likewise, when we hear a problem in a specific instrument, we want to home in on it, and a great way to do that is to hit solo. Then we sweep for the problem (another issue we'll cover), but upon fixing it, we notice something else.
Then, of course, the creative solutions start pummeling us… “Ooh I could add harmonic distortion to the low-mids and warm those up, that would be nice...” “…hmm, seems a bit out of hand, how about some multiband compression…” “…Hey, aren't I forgetting something?”
Yes! You're forgetting to mix! When you're making a salad, you don't spend hours cutting up a single carrot! You mix ingredients together. The same applies here: hit that solo button to confirm your deepest fears, and go ahead and let yourself deal with that one troublesome resonance. But then immediately put the mix back in—or at least fold in a couple of other tracks to give you some context. You'll wind up chasing your tail otherwise.
This isn't the biggest mistake you can make, but it might be one that separates the wheat from the chaff, so to speak. Certainly, in my experience, an excess of information in the low-mids makes a tune sound less "radio ready."
Yes, we can be rescued in this sin by our saviors, the mastering engineers; they exert some finesse in helping us out of the low-mid tub. They’ve indubitably helped me, and I’ve tried to pass the favor along to others.
But still, why not fix it yourself?
It's roughly that 200 Hz to 600 Hz area I'm talking about here: if you place an inarguably inferior mix next to a proven song in the same genre, you'll surely notice not only the comparable dullness of the top, but the fat around the lower middle. This band, if not divvied correctly among instruments, can take away from the precision of transients, the power of your harmonic backups—be they guitars or synths—and contribute some indistinct qualities to the vocal as well.
Many factors contribute to this mistake, but I'll lay two down right off the bat: an inferior monitoring/listening environment, and an underutilization of reference tracks.
The first problem is self-explanatory: if what you're hearing isn't correct, you'll never know whether you're making the right moves.
Interestingly enough, this problem can be solved, at least in part, by addressing the second issue. Yes, you should acquire decent monitors (easier to do at cheaper price-points these days). You should also hang up some room treatment, and, as a last resort, use some sort of DSP compensation.
But you can also train yourself to understand the inadequacies of your monitoring system with reference tracks you know particularly well. It goes back to that "childhood" mix I talked about a few articles ago. If you play a mix you know from your childhood through your monitors and take note of where you hear palpable differences (such as, "Hey, this sounds quieter in the low-mids than I remember") you'll be clued into how your room or system isn't accurate in that circumstance.
Dynamic EQs—as opposed to multiband compressors—seem to be in vogue these days. Though the line between the two processes blurs, it's easy to see why a dynamic EQ would come in handy: without messing up other bands, you can effortlessly select a specific range of frequencies and process those to your liking. Yes, sometimes a dynamic EQ is actually preferable to a static one, and yet, sometimes engineers resolutely hold to their fixed EQ.
Here's an example of when it might be wiser to grab a dynamic EQ: say you have a build-up in the vocal of that dreaded harmonic resonance point, 2 to 3 kHz. You try a fixed EQ in that range, but wind up draining the singer's luster.
Do you compromise on the cut, living with some harshness? You could—or you could switch to a dynamic EQ. With such a process, you have more control over how the EQ starts behaving. If the singer only hits that resonance during louder passages, a dynamic EQ could help tame these frequencies when the vocalist starts belting.
Conversely, a dynamic EQ can help with a more constant problem: when we cut those troublesome frequencies with a fixed attenuation, they're always heard at lower levels. But a dynamic EQ gives you a time constant—an attack and release; you can set the EQ to let a little of the meddlesome resonance through, tricking the ear into thinking nothing is unnaturally missing, but simultaneously addressing the issue. Gone, but not forgotten.
The same applies to lower frequency bands—a bass with an uneven bloom in the four hundred range, for instance. Tubby guitars, of the sort referenced in the preceding tip, can also be addressed with a dynamic EQ.
The inverse problem also rears its head; as dynamic EQs are all the rage these days, they can be abused. An engineer can apply them all over a channel or buss, thus promoting a weird, inorganic kind of multiband compression. As referenced in earlier articles, improper multiband compression can lead to a host of problems.
The key, as always, is to listen. If the dynamic EQ you’ve enabled has had an unnatural effect on either the overall resonance or dynamic interplay of the track, that’s an indicator it isn’t the right tool for the job.
This is a common mistake people make when starting out, because it's hard to know the difference between your standard EQ and its linear-phase sibling without an explanation; also, your DAW tends to provide both, and sometimes they look the same! Perhaps you’re wondering: why and when should I use one over the other?
Your typical channel EQ, unless otherwise indicated, causes phase-shifts when singling out frequencies. That is to say, it not only moves frequencies in level, but in time. This might seem unwanted, but have no fear: the resulting delay is often desirable—the particular phase-distortion an EQ imparts can very well be tied to its sonic signature.
Desirable or not, what these EQs do lack is literal transparency, even in their cleanest iterations. The time differential ensures this. A linear-phase EQ, on the other hand, takes these delays into account, recombining the signals at the output in a way that mitigates this delay. The result? An EQ often described as "lowering the fader on a frequency" rather than imparting any color.
But there are issues with linear phase EQs too. They can bring about a horrid little noise called “pre-ringing.” Also, since they attempt to realign any time disparities at the output stage, they can introduce overall latency—especially if they don't speak nicely with your DAW. On a stereo mastering session, this can be fine, as you’re mostly working with one stereo track. On a mix, you might get away with a few linear phase EQs (depending on the DAW), but piling them on can become problematic, even with good delay compensation.
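You can see both costs, the fixed latency and the pre-ring, in a few lines of scipy. The 511-tap length and 100 Hz corner are arbitrary choices for illustration:

```python
import numpy as np
from scipy import signal

fs = 48_000
numtaps = 511  # odd length gives a true linear-phase (type I) FIR

# Linear-phase FIR high-pass at a hypothetical 100 Hz corner
h_lin = signal.firwin(numtaps, 100, pass_zero=False, fs=fs)

# Cost 1: a constant group delay of (N - 1) / 2 samples, i.e. the
# latency your DAW's delay compensation has to hide
latency = (numtaps - 1) // 2           # 255 samples
latency_ms = 1000 * latency / fs       # about 5.3 ms

# Cost 2: pre-ringing, energy that arrives *before* the main peak
peak = int(np.argmax(np.abs(h_lin)))
pre_ring = float(np.sum(h_lin[:peak] ** 2))

# A minimum-phase version of the same magnitude response pushes its
# energy to the front, trading the pre-ring for phase shift
h_min = signal.minimum_phase(h_lin, method="homomorphic")
```

Stack a handful of these on separate tracks and those 5-odd milliseconds are exactly the kind of disparity that produces the flamming described below.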
Here's an example: have you ever worked with two kicks in a production, EQ'd one of them, and noticed there was something funky about the way they hit together afterward? A peculiar flamming you couldn't get rid of? This could very well be the culprit—especially if your EQ is switchable within the plug-in itself. Check to make sure!
Sometimes EQ isn't the right tool for the job. You can drive a screw with a hammer, sure, but you'll mangle the screw and scuff the wall around it; hammers are for nails. Similarly, sometimes a well-executed pan move, or a simple level change, minimizes the need for frequency tailoring.
My wife often comes in while I'm mixing to ask if I'm hungry yet. I reply that I'm not, and that settles breakfast. When next she arrives with the same question and I give her the same retort, I'm invariably surprised to learn she is inquiring about a very late lunch; so much time has passed that I have failed to notice the needs of my own body (or marriage).
Working in this way—as I suspect you might, with or without the wife—how does one keep anything like perspective? This is a hefty problem, to be sure. A deadline is a deadline, but that doesn't change how our ears acclimate, react, and in many ways worsen over hours of exertion. EQ mistakes, in such cases, are involuntary, and they accumulate quickly.
The best safeguard against this problem is to take breaks frequently—a fifty-minute timer with a ten-minute allotment for breaks is not a bad idea at all. Such respites, especially if submerged in silence, can restore us to sanity.
However, many are the moments when my clock signals the time, and yet I choose to ignore it. I have a groove going; I’m not about to sacrifice that groove. This too must be factored into your decisions, as there is nothing so bad as losing one's mojo. Thankfully, we have a second tool to help us keep perspective, and that tool is a reference mix. Or multiple reference mixes. Read on.
Some people shy away from using reference mixes as musicians might shy away from learning music theory, or painters might avoid traditional brush technique; they feel it robs something from their artistry, their originality. This, I've often argued, is to their detriment, for reference mixes are not employed to turn us into forgers—the very differences of the performance you're mixing will mitigate that concern right off the bat.
Instead, think of a reference mix as a compass (or a constellation) for navigating an ocean at midnight. When the hour is dark, you sometimes don't know left from right, north from south. But your compass knows. The North Star, virtually immutable in the night sky, serves as another such indicator.
So it is in the weeds of mixing; if we've worked for so long and so hard that we don't know what a good snare drum sounds like anymore, it helps to have a good snare drum on hand to reference. Otherwise, we might deprive our snare of proper frequency treatment.
I try to have a few references on hand when I'm mixing a track. The first is the client's choice—the thing he or she wants the song to sound like. The others are my own. They could be general (what I want the song to sound like) or specific (what I want the kick to sound like). They are level matched many times throughout the mix, but one thing about them remains constant: after I've made huge decisions in my mix, I refer back to these references to make sure my goals—the ones I've set for myself—are still being met. This keeps me honest.
You hear a pesky snare resonance and you cut it. Then you hear another, due to the focus you've put into listening, so you cut it. Now you hear another. Then another. Soon, you've created a series of notches that sucks the life out of the snare. Does this sound familiar?
Here’s the simple fix: Stop at two notches, and let it rest for a while. If the sound still bothers you after you've moved on for ten or fifteen minutes, add another notch if you must.
Ah, frequency sweeping! This is the practice of boosting a range of frequencies and moving them around to locate the right center-point, either for boosting or for cutting. There are two schools of thought here. Some engineers advocate avoiding the sweep altogether because it changes your perspective (they point out that for every "right" frequency you isolate, you audition hundreds of wrong ones). Some don't care, preferring the speed sweeping affords.
There is truth in both arguments. Me? I go for a halfway approach, because my perception can absolutely be altered by excessive sweeping. But I'm not afraid of the practice, within reason—after all, we have all these aforementioned tools to keep our perspectives sharp. So I sweep at first, but when I'm close to the right frequency, I stop sweeping, and do the following:
I set up the boosts or cuts as I think they should be, but do so in bypass. Then I disengage the bypass and listen. If I’m wrong, I know right away and put it all back in bypass. I rinse and repeat until I’m on the money. It seems like a slower process, but it trains your ear to move faster the more you do it.
Why twelve EQ mistakes? Why not ten or fifteen? I could be enigmatic and say I've given you one for every note of the tempered scale, but that would be rather pretentious of me. What we have here are simply all the mistakes I could think of. I am sure there are more. (Frequency masking! See? There’s another one!)
However, watching out for these twelve mistakes will serve you well. Pay attention to these potential pitfalls, and you'll be less in danger of falling down the rabbit hole. We’ll be sure, in future articles, to dive into specific areas of the frequency spectrum and discuss best practices for dealing with them. You can check out the first of these articles—on taming harsh high-frequencies—right here.