So you have an interest in mixing music, and you’re looking to avoid the pitfalls that swallow up lesser engineers. That, dear reader, is very wise of you.
What follows are some of the biggest traps I’ve seen bedevil nascent engineers. If you find yourself guilty of any of these, fear not: I’ve been guilty of them too! So have we all. Read on, and learn from our mistakes.
Piling oodles of plug-ins on individual tracks is by far the biggest mistake I see in beginners. We’ve all fallen victim to this, and it’s hard, because we see our idols do it on tutorials, in blog posts, and sometimes in person, if we’re lucky.
But our idols have a method to their madness. When Dave Pensado displays a six plug-in chain on a YouTube video, he’s showing you a step-by-step blueprint. He knows why one EQ might be right for the high-pass on a track, why the next is appropriate for boosting the high mids, why such-and-such compressor has the perfect complementary tone, and most importantly, how all the plug-ins are going to work together.
Yes, it’s important to search for the right moves and to experiment with new tones. But experimentation can be detrimental, in the mixing phase at least, if you don’t have an idea of the sound you’re going for in your head, or better yet, a concrete reference of that sound on hand.
I once watched a talented engineer who had just graduated from audio school sit with a classical recording. Little by little, he turned the sound of a baritone vocalist into something overly reverberant, a bit harsh, and altogether unnatural. Eventually, he turned to me and said “it’s not working, is it?”
So he did something daring: he took off all the processing and limited himself to two plug-ins and one send effect. To his surprise, it turned out quite well. I’ve since heard classical recordings this man engineered, and they are as excellent as anything I’ve listened to.
I take no credit for his evolution; I didn’t offer any advice or criticism, and limiting himself to two concrete plug-ins was his idea. I only watched him learn that you don’t have to work so hard, especially if you don’t fight the natural sound.
It comes back to this: Whenever you’re reaching for a new plug-in, do you know what you’re trying to achieve with this next move? Are you serving or fighting the sound?
Of course, the answer to this question is dependent on maintaining a clear picture of what you want to accomplish, which brings us to our next pitfall:
This is an affliction that doesn’t just affect beginners—my peers and I get caught up in this one all the time. It can be enthusiasm as much as anything else: We just want to keep working, so we often don’t let ourselves stop to wonder what we’re trying to accomplish in the first place.
Now, a contractor would never build a house with only a loose idea of where the bedroom is. So why do we think we can fashion a mix with little idea of how we want the bass to end up?
It may be suitable for artists and producers to mess around, but I’d wager our job, in the mixing phase, is more like artfully executing blueprints than painting a landscape. If you agree, then it’s best to have a clear idea in mind—even if it’s only a glint—of what you want to do before you set out doing it.
You’d think that in this world of floating-point mathematics, conventional gain staging could be laid to rest. However, many plug-ins, especially analog emulations, respond to the strength of the incoming signal in the manner of yesterday’s gear. If you use analog gear or emulations on an aux track, juicing the level of the instruments feeding that aux might distort the sound past what you’d want. This becomes especially problematic in large sessions, where you need to pay attention to the levels of many moving parts.
Also, you’re stuck in your own system if you don’t pay attention to good gain structure. What do I mean by this? Live mixing, working off a producer’s session, using an analog board in any recording capacity—these money-making tasks are much harder to execute when your modus operandi doesn’t play well with these formats, some of which call for proper, analog-style gain staging.
That’s why I tend to treat signal flow within a DAW as I would an analog console. It keeps me more mobile, should I need to be. It also helps keep my sessions more organized, in case I decide I need to assign channels or whole groups of channels to new/different busses.
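To make the analog-style discipline concrete, here is a minimal Python sketch of the underlying decibel math (NumPy assumed; the −18 dBFS target and the `trim_to_target` helper are illustrative conventions I’m choosing for the example, not a rule every plug-in follows):

```python
import numpy as np

def dbfs(x):
    """Peak level of a signal in dBFS (0 dBFS = digital full scale)."""
    peak = np.max(np.abs(x))
    return 20 * np.log10(peak) if peak > 0 else -np.inf

def trim_to_target(x, target_dbfs=-18.0):
    """Gain-stage a signal so its peak hits a target level.

    -18 dBFS is a common (but not universal) digital reference
    for an analog-style "0 VU" operating level.
    """
    gain_db = target_dbfs - dbfs(x)
    return x * 10 ** (gain_db / 20)
```

Feeding an emulation at a consistent reference level like this is roughly what calibrating to an analog console’s nominal operating point would do for you.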
When you’re just starting out, it’s hard to know if two signals are out of phase, especially if no one’s around to teach you how to recognize the predicament.
I remember one song sent to me by an old friend; he had complained about a generally washy feeling to the drums in the rough mix. The overheads, it turned out, were out of phase with each other—just flipping the polarity on the right overhead greatly tightened up the mix.
Indeed, drums are often problematic, so here’s what I do when presented with a multi-miked kit: I check the phase of the overheads against each other, flipping one of the overheads to see which gives me a more cohesive, solid picture. It’s usually a night and day difference.
Then I test the other mics against the overheads in solo. I listen for which combination has more body in the lows and low-mids. I also watch the meters—chances are, the polarity arrangement that yields the higher level will be the one that’s in phase.
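If you’d like to verify that meter check numerically, the procedure can be sketched in a few lines of Python (NumPy assumed; `rms` and `best_polarity` are hypothetical helper names for this illustration, not part of any DAW’s API):

```python
import numpy as np

def rms(x):
    """Root-mean-square level of a signal."""
    return np.sqrt(np.mean(np.square(x)))

def best_polarity(reference, mic):
    """Return the mic signal, flipped if that sums louder with the reference.

    A higher combined RMS usually means the two signals are
    reinforcing rather than cancelling each other.
    """
    in_phase = rms(reference + mic)
    flipped = rms(reference - mic)  # polarity flip = multiply by -1
    return mic if in_phase >= flipped else -mic
```

This is the same logic as watching the meters: whichever polarity arrangement yields the higher summed level is usually the one that’s in phase.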
But equally troubling for beginners, I’d wager, can be knowing when to leave elements out of phase—or knowing when to manipulate phase relationships for intentional effect.
Drums don’t usually apply here, but multi-miked guitar cabs do. In this case, you can think of the phase relationship between two mics as an opportunity for tonal variation—an EQ, almost. Keep in mind that the quality of sound will change depending on the relational level of the tracks too.
To sum up: when mixing elemental instruments like drums and bass, check for phase, and try to favor the cohesive picture. When mixing elements are not so foundational to the track, learn to use phase relationships to create the best tonal picture.
I remember my early fear of a dry signal. Out of this fear, I’d slap reverb on nearly everything. But in my early days, this approach yielded nothing but a pea soup of sound.
I was not yet cognizant of how differing reverbs signify particular trends or genres, or how some sounds might have been recorded with reverb already—guitars being a good example, but also synths given to you by a producer.
Once again, learning grew from limitation. So if putting reverbs on every track sounds like your bag, I invite you to try the following: limit yourself to only four or five reverbs across a whole mix, and shoot for fewer if possible. Maybe apply some verb to the drums, the vocals, a touch of “exploding snare,” and a spring verb on an otherwise dry guitar.
The same goes for other effects. In our efforts to make everything interesting we can dull the overall impact of the entire mix. Therefore, learn the intentionality behind modulation, delay, and conventional pitch variance. Understand what, exactly, a phaser will get you, as opposed to a flanger or a chorus. Learn how delays can expand the spaciousness of a sound (a synced, low-level stereo delay), or establish a genre (a rockabilly slap, for example).
For related reading, check out our blog “9 Common Reverb Mistakes Mixing Engineers Make.”
I’ve written about this at length in other articles, but it’s a classic mistake, one not ameliorated by the plethora of YouTube tutorials out there. Sure, plenty of big names have great tips to offer, and to demonstrate these tips, they’ll often play their results in solo so you can better hear them. However, these engineers don’t always remember to warn you about working in solo; if you didn’t know, and if you stumbled on the video, it might convey the wrong idea.
So let’s trot out the old saw and reiterate that, generally, it’s not good to mix a single sound in solo, as you lose perspective quite quickly. Still, there are caveats:
For brief moments where it’s necessary to home in on a problematic part of the sound—like a resonant snare drum—soloing is appropriate. There’s also nothing wrong with soloing a group of tracks: mixing the drums, bass, and vocals in solo to achieve a better micro-balance is useful in short intervals.
So many times I’m presented with a mix and asked, why doesn’t this sound like the real thing? Half of the time there are sonic issues, but often it’s the editing: If something is off key or out of time, it falls upon our shoulders to fix it as best we can—but always in line with the artist’s intentions. Nobody is going to autotune Bob Dylan (at least, I hope not), but Justin Bieber is another story.
Similarly, a band like the White Stripes would get more off-the-grid leeway than an outfit like Imagine Dragons. It behooves you, again, to vet the intentions, to check the references, and to make the changes.
Ah the dreaded smile curve! It’s brought many a frown to budding engineers the world over. Rest assured, we’ve all been there, piling on bottom end and treble as though they’re ingredients that can never go sour.
This is one of the larger mistakes, leading to ear fatigue and poor translation across speaker systems (you’re already applying a “Beats” curve to music that might very well be played on Beats headphones—and Double Beats is never good!).
I don’t know of a remedy for this other than time and referencing. In my experience, the longer I engineer, the less I feel the need to exaggerate bass and treble. As with the earlier pitfalls, discipline in accordance with referencing is the way out: don’t do it unless you know—and have verified against a reference you trust—that the track calls for it.
This pitfall is easily understandable, because everyone wants to have a fully polished record right from the jump. Who among us hasn’t fantasized about the mastering engineer saying “this needs nothing”? Making the desire more tantalizing is the plethora of tutorials in which Grammy-winning mixers show us their fully limited mix-bus chains.
You must remember, at your beginnings, that these are seasoned engineers—and one day, you will be too!
When you’re just starting out, it’s good to match a master in terms of timbre, sure, but not in terms of loudness or level, because the tools to secure those higher levels are harder to hone.
Instead, bring the reference down to give you headroom, so you don’t have to fight with the digital ceiling. Give yourself the chance to learn how signals play off of each other with reasonable headroom before you worry about shaving off the peaks. Otherwise, you’re in danger of fashioning a harsh mix, and what is worse, prolonging the learning process into a series of plateaus. It’s like that old phrase: you’ve got to learn to walk before you can run.
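Pulling a mastered reference down is a one-line gain change. The sketch below shows the math in plain Python (the 6 dB of attenuation is just an example figure; `db_to_gain` is a hypothetical helper name):

```python
def db_to_gain(db):
    """Convert a decibel change to a linear gain factor."""
    return 10 ** (db / 20)

# Attenuate a mastered reference by 6 dB so it sits closer to
# mixing headroom; you'd multiply your reference's samples by this.
attenuation = db_to_gain(-6.0)  # roughly half the amplitude
```

Comparing against a reference at a sane level keeps you from chasing the reference’s limiter instead of its balance.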
Having written this article on ten common beginner mistakes, I’d like to invite you to do something counterintuitive: Dive head first into every one of them. Take a project you’ve already mixed and start from scratch. Spend an hour on a weekend working on everything in solo, applying reverb to every track—and doing all of it with a complicated master chain in place.
This isn’t meant to be snide, negative reinforcement. Rather, I think two things may come of this enterprise: you may hear for yourself the detrimental effects of these pitfalls, or conversely, you may stumble upon something unique and amazing. Both outcomes are great, and they both serve the larger goal of experimenting.
There is nothing wrong with experimentation, though you may find controlled experimentation to be of better service to your growth, to your deadlines, and to your clients. So go to it, armed with these potential pitfalls!
Copyright © 2001–2020 iZotope, Inc. All rights reserved.