No two people sound exactly the same, and no two songs are exactly alike, which can make it challenging to provide tips on how to fit a vocal into a mix. We can give you all sorts of practical advice on EQ, compression, and other techniques, but you need to find ways to port them into your own brain. That takes practice, time, and perspective.
With that in mind, here are nine perspective-based tips on how to fit a vocal into your mix. I will highlight tools offered by iZotope and show you how to use them wherever applicable, but I also hope these perspective-based tips will give you the framework to make quality judgments in your own mixes.
1. Ask yourself the following: what does it mean to fit the vocal into a mix?
Phrases like “fit your vocal into the mix” are thrown around with great abandon on the internet, but what do they actually mean? Is it a matter of adjusting the EQ to fit the arrangement and mask recording imperfections? Is it paying attention to the level of the vocal each step of the way? Is it gain-staging, adding effects, and embellishing the vocal with clever edits?
Questions like these can easily drive you mad. The more apt question is, “what does it mean to fit the vocal into this mix?” Make it a habit to ask yourself this question every time. When you do, the answers will present themselves more and more clearly.
You ask the question, and then notice the vocal imperfections that need to be tamped down, or frequency ranges that conflict with the guitars. It becomes apparent that this vocal should be buried in the mix, not boosted to an unnatural degree, because the aesthetic of this particular song calls for it. You ask the question and realize that reverb has no place in this particular tune. You ask the question and realize that a delay throw on one phrase in the chorus is necessary to drive the point home.
The answers are always different, which is why you always ask the question.
2. Consider the rough mix a guide
Consider that you don’t really know how to make the vocal fit with the mix—you only have an idea of how it should fit. It might be different from what the producer or artist intended, and you have no way of fully grasping their perspective, right?
Well, not quite: if the mix came from an artist or producer, it probably came with a rough mix attached, an example mix that helped everyone agree to move the song along. If you’re recording someone else’s project, on the other hand, chances are the artist wanted something to take with them, so you deliver a preliminary mix, also called a rough mix, at the end of your first session. When I’m engineering for artists, they always come away with a mix sporting a little processing on the vocals, something that shines the best light possible on the track in the short time I have.
Having played in bands, I know from experience that nearly all engineers do the same thing: the rough mix leaves with at least a little polish so that even if it’s an early, static mix, it’s a static mix that still sounds good. If you’re the artist, you’re probably exporting the track to test on different sound systems—in the car, on your headphones, etc.—while it’s still in progress, so it’s helpful to have a rough mix that’s as close to the desired, finished mix as possible.
The point is that there’s always a rough mix, and the rough mix can always be your guide. There’s something about the energy of a same-day rough mix, something about its immediacy and urgency, that serves as a guidepost when judging your own vocal in comparison. When you’re mixing your own project, reference the rough mix with a complete vocal unless it’s truly, sincerely bad.
Now, as you go through the various stages of mixing your vocals, always judge the placement of the vocal against this rough mix. Be ruthless with yourself. Have I preserved the immediacy of the rough mix? The excitement? Have I ruined it all with some frequency cut? Is it now too quiet? Too loud? Try never to lose the impact of the first, most urgent mix.
3. Identify and use the best references
The tune will not exist in a vacuum. It will almost certainly remind you of something you’ve heard before, so use this to your advantage. If the singer reminds you of Ted Leo, bring in the Ted Leo track. Any vocal reminiscent of the one you’re working with can be a guide, if only to hear how others mix to the timbre of that particular voice.
Conversely, you may hear the arrangement/production of the tune and feel what it’s supposed to sound like right off the bat. You may be reminded of a comparable song with comparable feeling. Drag this tune into the Reference section of Ozone 9 and refer to it intermittently as you work on your track. Check out the video below to learn how to reference tracks using the Reference section in Ozone:
Between these two references, one matching the singer and one matching the song, you’ll have good timbral goalposts as you approach your mix. Go about mixing the vocals in all the ways we’ve shown you before, but compare your work to these references from time to time.
Your goal isn’t to make the vocals sound exactly like the first mix or the second, but to bridge the gap between all three as best you can. Try to make the three different tunes congruous in a playlist, simply by the work you’re doing to the vocal.
4. Consider a standalone application to clean up your vocal
After setting up a static mix, I examine my comped vocal in the RX 7 Spectrogram and do some initial touch-ups. Doing this may seem slow at first, but it really speeds things up in the long run. I find that taking a pass in a standalone editor helps me polish the vocal for mixing in a more objective way.
When I make the vocals my only focus, polishing them up under this microscope satisfies the part of my brain that loves puzzles. I seem to do a better job when I take this approach, as the tools afforded to me in RX 7 Audio Editor are tremendous.
In this example, the Gain tool is my first line of defense against the dreaded “ess” sounds. I can quickly recognize and attenuate them in the Spectrogram display, listening back to make sure they’re at an appropriate level.
Speaking of level, I can use the Leveler module to even out dynamics without incurring the effects of compression.
Because I have my DAW session open at the same time as RX 7, I can always flip back to it for context when leveling. I learn what phrases need to be a little louder, what words fall off, what words need to remain hidden—surprisingly, a big part of the job is knowing when to leave a phrase a bit buried so as not to ruin it in the mix.
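RX’s Leveler works its own way under the hood, but the basic idea of leveling, as opposed to compression, can be sketched in a few lines of Python. Everything below (the function names, block size, target level) is illustrative, not iZotope’s algorithm: each block gets one static gain that nudges its RMS toward a target, so the waveform inside the block keeps its shape instead of being squashed sample by sample.

```python
import math

def rms_db(samples):
    """RMS level of a block, in dBFS (full scale = 1.0)."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(max(rms, 1e-12))

def level_blocks(signal, block=1024, target_db=-18.0, max_gain_db=6.0):
    """Crude leveling sketch: move each block's RMS toward target_db.
    Gain is constant within a block, so transients inside the block
    are scaled, not reshaped, as a compressor would reshape them."""
    out = []
    for i in range(0, len(signal), block):
        chunk = signal[i:i + block]
        gain_db = target_db - rms_db(chunk)
        # Limit how far we correct, so silence isn't boosted wildly
        gain_db = max(-max_gain_db, min(max_gain_db, gain_db))
        g = 10 ** (gain_db / 20)
        out.extend(s * g for s in chunk)
    return out
```

A real leveler smooths the gain between regions so block boundaries aren’t audible; the sketch skips that for clarity.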
Clicks and pops become painfully apparent in the Spectrogram, and I can remove them without compromise using the Mouth De-click or De-click modules. Stray plosive? Not a problem: there’s a tool for that. With my eyes, I can spot inharmonic distortions that either clip or will clip with further processing, and I can erase them with the Paintbrush tool.
As I’m going through, I will round-trip my vocal back into my DAW—this operation changes depending on the DAW, of course—and compress the hell out of it for a final review. Why? This is my best test for revealing room tone and forensic issues.
There may be room noise surrounding the vocal from the start. If the mix is a noisy garage affair, I might not need to de-noise the vocal at all; then again, it may be important to do so depending on the desired feel of the music. Compressing the vocal mercilessly, so that the atmosphere around it is made plainly apparent, will inform how I proceed with the mix.
Note: I will not use this compressor in the final mix, of course. This is just a diagnostic tool.
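The reason heavy compression exposes room tone is visible in the static gain curve of a compressor. Here is a hypothetical hard-knee curve in Python (illustrative settings, not tied to any particular plug-in): a vocal at -10 dB gets pulled down to -37 dB, while room tone at -60 dB passes untouched, so the gap between voice and room shrinks from 50 dB to 23 dB and the noise floor jumps out at you.

```python
def compress_db(level_db, threshold_db=-40.0, ratio=10.0):
    """Static gain curve of a hard-knee compressor: levels above the
    threshold are reduced by the ratio; levels below pass through.
    Extreme settings like these drag quiet room tone up relative to
    the vocal, which is why squashing a track reveals noise problems."""
    if level_db <= threshold_db:
        return level_db
    return threshold_db + (level_db - threshold_db) / ratio
```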
5. Keep the vocal in for the entire mixing stage
The vocal doesn’t exist on top of the arrangement—it is a part of the song. Choices you make to the vocal impact everything else in the mix, so it’s unwise to work without the vocal in place. If you save the vocal for the end, you risk delivering something that sounds more like karaoke than a finished product.
Every engineer has their own MO for how to handle this conundrum, and things can change from song to song. In a typical workflow, I start with some sort of static mix at the outset. After that, I work on the relationship between the drums and bass. During this pass, I do not mute the lead vocals.
Instead, I instantiate a trim plug-in and bring the lead vocal down 12 dB or so. The vocal remains dimmed in the mix as I’m adding EQ and compression to all the drums. When I’m auditioning the interplay between the kick and bass, I bring the vocal back up to its original level and make sure they’re all playing nicely. The same goes for the snare.
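The arithmetic behind that 12 dB dim is simple: decibels map to a linear amplitude multiplier by 10^(dB/20), so -12 dB is roughly a quarter of the original amplitude, quiet enough to sit out of the way but loud enough to keep its place in the picture. A minimal sketch, with hypothetical helper names:

```python
def db_to_gain(db):
    """Convert decibels to a linear amplitude multiplier."""
    return 10 ** (db / 20)

def trim(samples, db):
    """Apply a static trim, like a dedicated gain plug-in,
    without ever touching the channel fader."""
    g = db_to_gain(db)
    return [s * g for s in samples]
```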
Here’s something I do that other engineers might not: when I move on to the harmonic instruments and background vocals, I will mute the drums entirely. Why do I do this? Honestly, because I’m a huge Alan Parsons fan, and he once evangelized this technique; I received the sermon at a young age and promptly fell in line.
In any case, something about taking the drums away really works for me, and I’m able to get the appropriate interplay between the vocals and instruments without paying attention to rushing, thwacking transients.
I share my idiosyncratic methods to highlight that we all have idiosyncratic methods, just as we all hear sound differently. However, the importance of the vocal is not sacrificed to my quirks. In your own methods, always aim to preserve this most vital element while you’re judging your mix.
6. Don’t use processors “just because”
If you were to base your method of operation on tutorials and videos alone, you’d think every vocal needs EQ, de-essing, compression, reverb, and delay. You might even think there’s a specific order to it: subtractive EQ first, compression, then an EQ boost, some de-essing, followed by additional effects—but this misses context. Your vocal might have been recorded with the perfect amount of compression already, so why would you squash it further?
The vocalist might also never need a de-esser. I’ve mixed and mastered quite a few tunes for the band Leland Sundries, for example, and their lead vocalist has never once required de-essing. I’ve been in the room when he recorded, and I can attest to the ribbon, dynamic, and large-diaphragm condenser mics used on his voice; not one of them ever called for a de-esser.
You never know what a vocal needs until you listen to it. It might seem obvious, but engineers who operate in the digital age often look to cram things through the pipeline of digital presets. Don’t fall prey to doing so before listening to the vocal.
7. Use the Intelligibility Meter in Insight 2
If I put an instance of Relay as the last plug-in on the lead vocal chain and at the end of every instrument submix, I’m able to monitor the intelligibility of my vocals in relation to the rest of the material. This meter is quite handy, as vocal intelligibility is a hard mark to achieve. Intelligibility is, after all, entirely dependent on the material.
What remains constant is this immutable fact: whether I need to bury the vocal or raise it up, I need to check my results with some measure of objectivity. The Intelligibility Meter, particularly when set to low-noise-level environments, can aid in making an impartial judgment call.
If my vocal stays in the sweet spot and sounds right to my ears, there’s my answer. If it soars above or falls below, then I listen to what my ears tell me and decide based on that.
8. Use subtle delays and distortion to bolster the vocal
When it comes to lifting and strengthening vocals, you have other tools at your disposal besides EQ and compression. One of my favorites is the addition of a slight slap echo, usually timed to a fast, prime-numbered interval. I’ve long felt that an echo out of sync with the song’s tempo can help a sonic element stand out, and I’m also of the school that places a certain magic in prime numbers.
I’ll send my vocals to a delay of no more than 91 ms, judging the best timing by the music and vibe of the tune. Then, I’ll dip that delay to a low level—even -20 dB may be too much. I may or may not EQ the delay, but I usually notice that the vocals now have a dimension to them they didn’t exhibit before. And when I take away the delay, it’s gone!
Here’s a mix, presented sort of at the beginning of the mixing process. Note the level of the vocal, which I’ve intentionally buried a bit.
Now, I’ll add a delay that fits the tune—just a simple, run-of-the-mill delay set to 61 ms.
We sit it back in the mix, and this is the result.
Vocals Sitting Back
It’s not obviously louder, but compare the first example with the final one, and you’ll hear that the vocal appears to be stronger.
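If you want to see what a single-tap slap echo actually does to the signal, here is a minimal sketch in Python. The 61 ms timing and -20 dB mix level follow the example above; the function itself is illustrative, with no feedback, so there is exactly one quiet repeat tucked under the dry vocal.

```python
def slap_delay(dry, sr=44100, delay_ms=61.0, mix_db=-20.0):
    """Single-tap slap echo: one delayed copy of the signal mixed in
    quietly under the dry vocal. No feedback means a single repeat."""
    n = int(sr * delay_ms / 1000)      # delay time in samples
    g = 10 ** (mix_db / 20)            # echo level as a linear gain
    out = list(dry) + [0.0] * n        # leave room for the echo tail
    for i, s in enumerate(dry):
        out[i + n] += s * g
    return out
```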
This can also work with distortions, or by introducing weird, artifacting sounds with a tool like VocalSynth 2. In certain arrangements—already electronic—there’s something wonderful about introducing a low-level, granular distortion.
Here’s a horrendous-sounding bit of mangling via VocalSynth 2:
If I back that down to -11 dB and put it in the mix, it sounds like this:
-11 dB Vocal
9. Make your peace with automation
To give vocals a final polish—a final interplay with the mix, if you will—there’s no way around automation; you will need to automate something. Maybe it’s the overall level from verse to chorus. Perhaps you need to automate the de-esser, whose normal settings don’t work for one or two sibilant moments. Maybe you need to ride the level of the send hitting the delay.
Whatever it is, count on automation being a necessary part of the process. Make your peace with it. Great mixes are musical mixes. Musical mixes are dynamic in nature. Dynamic means “changing from time to time.” A single note held with the same velocity, with no beginning and no end, is not called music; it’s called a test tone. There must be variance for there to be greatness, and when it comes to a lead vocal, the best tool for variance is automation.
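Under the hood, fader automation is just a piecewise-linear gain curve over time. A minimal, hypothetical sketch of reading such a curve, with breakpoints stored as (time, dB) pairs the way a DAW stores automation nodes:

```python
def automation_gain(points, t):
    """Piecewise-linear automation curve: `points` is a sorted list of
    (time_seconds, gain_db) breakpoints. Returns the interpolated gain
    in dB at time t; before the first or after the last breakpoint,
    the curve holds its end value, as DAW automation does."""
    if t <= points[0][0]:
        return points[0][1]
    for (t0, g0), (t1, g1) in zip(points, points[1:]):
        if t0 <= t <= t1:
            frac = (t - t0) / (t1 - t0)
            return g0 + frac * (g1 - g0)
    return points[-1][1]
```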
But automation is also scary: it boxes you in. You forget that you’ve committed your fader to a curve, and sometimes it drives you nuts. You try to lower the volume afterward and find you can’t. You drag a region to another place and the automation follows. It’s a lot to keep track of!
My workaround for automation is simple, if a bit ungainly. I devote a plug-in to level automation at the end of the chain, so I can still raise or lower the fader. I may also have a second de-esser or EQ in line for automation purposes. This makes it much harder to make mistakes when moving an automated region around and lets me keep my hands on the faders if need be.
Getting the vocals to sit right is not a set-it-and-forget-it situation. Ask the questions, set up your references, and then do whatever it takes to get there. If you don’t know what to do, or if the tools at your fingertips don’t make sense to you, you have all the tutorials written by me and my estimable colleagues.
An example: You may not know that the vocal needs distortion or reverb to mask an imperfect recording. You may not know what comb filtering is, and why it’s turned your particular vocal into a hollow mess. But you do know the recording doesn’t work for you, and you can pinpoint the issue with your ears. You can say “this sounds hollow,” and work from there. Furthermore, you know that we’ve provided articles about how to address such problems.
You may need to do more research when you encounter a specific problem, but if you don’t give yourself the proper perspective to identify issues with your own ears, you won’t know which questions to ask. There’s no harm in asking the question and doing your own research, while there is a great deal of harm in remaining ignorant once you know there is a problem.
With that, I leave you—may your vocals always fit properly into your mix.