When Latrell James walked into iZotope’s media room, a small study with warm lighting and plush couches, his eyes panned over the array of musical instruments and audio equipment new and old: studio monitor speakers in the corner, a digital piano resting on its stand, guitar amps on the floor. His eyes locked onto an SP-1200, a grey, boxy, beat-making machine, the same sampler with which producers like Pete Rock paved the way for hip-hop’s golden age in the early ’90s.
“Nice to meet you,” James said, bowing.
Most people know James as a rapper. His lyrics are sharp, his vocabulary expansive and poetic. His earnest delivery has helped him share stages with the likes of Kendrick Lamar. Others know James as a producer, his fingerprints all over Boston hip-hop and beyond, his beats bound by strings and bassy synths. Most recently, James landed production credits on J. Cole’s label, Dreamville.
Although few know James as a mixing and mastering engineer, any musically-minded person knows that solid mixing and mastering make it possible to achieve a professional sound. I sat down with James to talk about his experience behind the board.
I’ve been using Pro Tools since 2003. Fortunately, my parents bought it for me after I begged them. At first, I thought recording was just recording: pulling the levels down and making sure it wasn’t too loud. Then I would go listen to someone else’s song on a CD and think, my song sounds terrible in comparison. Around the time YouTube broke, I started searching for mixing advice, and I found little things. But I was also on a forum called Future Producers, and that forum had so many producers in it. Legit producers now. I would post a question or a song, and they would give feedback on it. As soon as that happened, they were like, “Are you mixing anything? Compressing? Making sure the EQ is right? Making sure your signal chain is right?” That’s when I started adapting. It was probably around 2007 or 2008 when I decided I’d have music that sounds quality. I would YouTube and be on that forum all the time.
Then Pensado’s Place came out, and everything changed. When I found Pensado’s Place on YouTube...I don’t miss an episode anymore. If you’re not listening to Dave Pensado, you’re crazy. He runs through how he mixes something from top to bottom.
But that was really the start—2007, my music sounded terrible. It was like, why do I sound like I’m inside of a box? Why aren’t my vocals clear? Why is there all this extra reverb on my voice when I didn’t even put reverb on my vocal track yet?
I had to figure out what was wrong with my room, too. I was recording in a room with just wood on the floors—it just wasn’t right acoustically. Eventually, I made a little makeshift thing in the closet, and it worked for what it was. I found out that I could get the acoustics right by carpeting the room, dampening some of the sound, having a proper voice reflector, picking up a pre-amp—all of those things. Then I started mixing after that, and I was like, alright cool, I’m going to learn what compression does. I’m going to learn what EQing is. I’m going to learn how to put an exciter on something and make vocals sound brighter. I learned why you might need to put reverb on vocals to make it sound more natural.
Absolutely. I was going to school and listening to these records. I was playing the Kanye West records and asking how he got his songs to sound so great. It was those records that inspired me to make my stuff sound like this. Even to this day, when a new album comes out, I’ll play it on my speakers because I know my speakers better than anything; I’ve been with them for six years now. I’ll listen to the mix, because every engineer mixes things differently. I’m always intrigued about how they approach something. When I heard those albums—the Kanye West albums—it was like, okay, I need to do this, too.
It wasn’t just hip-hop. I was listening to the band The Darkness at the time, and I was like, how come when I record my guitar it sounds like crap, but when they do it, it sounds amazing? I didn’t know they were feeding it through an amp, a speaker, and then recording it into another mic. I was going directly into a mic.
All the time. When I first started producing, I always had what I called “the car test.” If it sounds good in the car, it might work. But now, I go from speakers to headphones to Apple headphones. Those also tell you the truth about your highs. If your highs are terrible, your ears will hurt!
Wow. See, that’s important. It needs to sound good on all surfaces. I go through all the things people would commonly listen to music through. You want to make sure it sounds good on Apple headphones and million-dollar speakers.
Do you produce and mix other genres besides hip-hop?
Mostly hip-hop, some other stuff. I wish I could record more vocalists, because they’re more intriguing—a lot more tracking and blending layers. Hip-hop vocalists tend not to do as much layering.
For my most recent project, I’ve been doing a lot of tracking with live instrumentation. The challenge has been learning how to properly place a mic away from the source so it doesn’t clip, because it’s totally different from adjusting a plug-in’s volume. It’s all learning, but watching Pensado’s Place has saved my life.
"When Pensado's Place came out, everything changed."
He’s amazing because in every episode he runs through his scenario. Maybe he puts compression on something. He’s like, “Okay, so from here, we gotta put compression on it. You can hear the esses on this person’s voice, you can hear the deep breaths that they were taking.” Compression basically makes everything go in the middle, so you can start hearing all the artifacts that were either in the high frequencies or all the artifacts that were in the low frequencies. He teaches you from those steps. Then, if you’re going to compress and you hear these things, it might be time to start EQing some of those things out to make it sound balanced. It’s just really cool to see the steps they go through. And every signal chain is different. There’s no such thing as having to put the compressor first, or the EQ first. It’s whatever your ear adjusts to first, but it’s interesting to learn from a professional engineer. It’s also free.
Read a lot. But most importantly, trust your ears. Trust your judgement.
A lot of times I’m listening and I’m looking, but most of the time I’m just listening. If I’m lifting the threshold on a compressor, it’s me listening more so than anything. I can close my eyes and pull the thresh and find where it needs to sit. That’s why I really don’t like visual EQs or compressors. You should be able to go to a board and figure it out just by listening.
Working within the stems and mix itself. I’m big on pocketing vocals and making them blend nicely within the beat—not too loud, not too low, the instruments aren’t clashing. To be honest, it’s mostly pianos. Pianos are my enemy. They are the mid-range eater. They eat every frequency in mid-range, and your vocals sit in that pocket as well.
EQ, compression, but you might have to sacrifice some things. You might have to pull the frequency out of the piano that gives it that edge, to make the vocals sit a little better.
Absolutely. Sometimes you have to fight it. There are times when I’ve taken some of the bass notes out of the piano so it doesn’t create any dissonance with the bass, but it’s a challenge. When you fall in love with the way something sounds, you don’t really want to start taking away the pieces that made it great. Sometimes you just have to sacrifice or edge something out with EQ and make a little pocket for it to sit in. But yeah, pianos are my biggest enemy in mixing.
Yeah! That’s it. When you look at a piano on an EQ frequency, it’s literally the whole spectrum. It’s like, “How dare you take up so much space!” But it’s an amazing instrument when mixed and EQ’d properly.
It starts at the very beginning, when I’m producing. For me, when I’m mixing and mastering, I know where the vocals need to fit, if there’s too much instrumentation, if I need to start making cuts now. I try to mix the record as I’m producing, and then when we get it into Pro Tools, I’ll make sure everything is sitting in the right pocket before laying vocals down. I don’t want to fight EQing and compressing at the end. If I can get the production where it needs to be ahead of time, then the vocals will sit.
You start thinking, I have to plan everything out. It’s like making a meal from top to bottom. Maybe I have to wash the dishes first before I start doing this. It’s all about space. How much space can you take up in the spectrum? Where does everything fit, and where does everything sound good?
Putting Ozone on your mastering signal chain is amazing. Mostly I’ve been using it in production. A lot of the big-body instruments are the ones I’ve been using it for—pianos, strings. Everything that’s in the mid-range. There are a lot of good things you can do even with some of the presets. I was with my buddy Arcitype at The Bridge Sound & Stage and he showed me Ozone first. He showed me how presets can get you close to where you want to go, and you can adjust from there. That’s what they’re for.
I went back home, and me and my brother bought it. Since then, I’ve used it on Pro Tools sessions. It gives you a good gauge for mastering purposes. The Vintage presets on live instrumentation sound amazing, just when you click them. But you can do so many other things from the Dynamic EQ and everything else. I keep finding new things all the time.
Control Presence on the Dynamic EQ and Classic Master on the Maximizer. I also use those on my beats when I close out my mastering chain on every track.
It’s like when you’re making a sandwich and you never had the bread, and you finally get the slices of bread and it’s just like [Latrell clapped his hands together]. That’s what those plug-ins do for you. They give you a great idea of where you want to take your mix. Saves me a bunch of time.
Synthesizers are a world of their own. They can be anything. I’m a huge fan of synthesizers in general.
As far as mixing, I have a Korg Mini that I’ve made into a rack because I didn’t want to have the keys anymore. Putting those sounds into my work? Number one, they come in super hot. It doesn’t matter what you do—they’re massive. So the first thing that I’m doing is compressing.
I also didn’t realize how babied we were when we got into virtual instruments. Everything is set and clean, ready for you to write with, as opposed to loading a sound off a keyboard. It might not be clean. You might have to do some cutoff, go into the envelopes and change some things. So when I started using the Korg, it changed my perspective on everything. A lot of it has been running lines, running it through something. I didn’t realize people could get certain sounds because they were running instruments through pedals, for example. It messed my whole mind up. I’ve been purchasing pedals recently, too, because I didn’t know you could run a bass through an envelope and then it would make a crazy funk sound. Now I’m trying to run my virtual instruments through a pedal and back in, or run it through the keyboard and change the process. Everybody tends to be in the box and take whatever the VST gives you, when you can change it and make something bigger.
When you get a virtual synth, a guitar will already be EQ’d to the point where you can just play it and it might fit in your mix, as opposed to potentially having to roll off some of the bottom or having some resonating notes from a live player. Overall volume, too: I can control that on the VST. When I’m recording live, I gotta make sure everything is right before going into the mic. You can’t go in afterward and try to boost everything up; you’re going to hear all the artifacts in the room.
Mastering is a labor-intensive thing because you’re beyond the mixing process. You’re shaping a file now. You’re shaping what it’s going to sound like ultimately. For me, that’s the scariest part. I could totally screw it up, or it could sound brilliant.
For mixing, I absolutely love it. When you record something, some people may not understand your vision. When I’m recording, I know what I want to do with the song. If I send someone a rough mix, it’s not going to sound anywhere close to what they would hear it as. Maybe they didn’t understand that I wanted the vocals to sound distorted in the bridge, or they don’t understand the roles the delays and reverbs play. I’m always excited about the mixing process.
Find an album that impressed you and pick apart why you were so impressed by it. A lot of times, it has something to do with the way it’s mixed. When you go back and listen to Beatles records, the records themselves are great, but the mix had a huge impact on what they sound like. Then I’ll try to emulate the record. That’s how I learned how to mix. If you listen to a Dr. Dre record, Dr. Dre believes vocals should be in your face the whole time. He doesn’t believe in letting his vocals relax in the mix; they’re literally in your face at all times. As opposed to Kendrick Lamar’s new engineer, Derek Ali, who is really creative when it comes to doing new things: panning entire sections of vocal tracks, making it feel like it’s spinning inside your headphones. Just try to emulate it. You’ll find something fun in mixing.
I fell in love with Kanye West’s “Flashing Lights,” because of that delay: “Flashing...lights...lights...lights.” That was amazing. I was like, how in the world did they do that?
Another great record that I think doesn’t get enough credit for blending synthesizers and live instrumentation is Michael Jackson’s “Baby Be Mine.” I know how much space synthesizers can take up, and to be able to give his vocals, synthesizers, drums, bass—all of these things—enough room to fit into a pocket? I listen to those records like every other day.
It depends. I do have 11 trusty plug-ins that I will never run without. I feel like learning your tools is more important than having a billion of them. If you know how to use the screwdriver and saw that you have, you can do a great job with them. But you should always experiment. I’ll find new plug-ins that I like, and I’ll try to implement them.
I’ll do a lot of layering when it comes to vocals. I try to get people whom I work with in the habit of layering, because not everything is going to sound good with just one mono vocal. So I’ll say, “Hey, maybe you should stack this three times, pan one left, pan one right, and see how it sounds.” Some people’s voices aren’t that big. Some people can get away with one single track, and some can’t. With my new music, there’s a lot of harmonizing.
It’s a full project. [The project was going into mixing the weekend after we spoke.] I’ve been annoying about it. When it’s something you create from the bottom up, you walk on eggshells every time you want to press send. There’s a lot of mixing on this one, and I’m having someone else help mix it, too. I knew that I wanted more than just my ears on it, so we’re going to go from there.