Today we’re going to explore width in the context of mastering. When we think about width in mastering, we think about it in much the same way as width in mixing. Width in mixing is all about creating a sense of space in the mix. Where are the instruments placed? Should anything be panned? Whatever decisions you made with stereo width while mixing to some extent determine the moves you make when mastering.
Learn more in this video, explore additional tips and tricks in the article below, and view our mid/side and stereo width resources here.
Before we dive in, it’s worth remembering that what we hear, and therefore every mixing and mastering decision we make, depends heavily on our playback system:
To hear how stereo width is used in mastering, try the A minus B technique when listening to some of your favorite tracks. Open up Ozone and place the Equalizer in mid/side mode to monitor just the side channel of the song. This lets you hear anything that was panned off-center. Sometimes you’ll clearly hear instruments panned wide left and right. It’s also common to hear a percussion instrument, guitar, keyboard, or sometimes background vocals panned off to the side.
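Under the hood, the A minus B (side) signal is simply the difference between the left and right channels, while the mid signal is their sum. A minimal sketch of this decomposition, assuming plain lists of float samples rather than a real audio file (`mid_side` is an illustrative helper, not an Ozone function):

```python
# Mid/side decomposition of a stereo signal (the "A minus B" idea).

def mid_side(left, right):
    """Return (mid, side), where mid = (L + R) / 2 and side = (L - R) / 2."""
    mid = [(l + r) / 2 for l, r in zip(left, right)]
    side = [(l - r) / 2 for l, r in zip(left, right)]
    return mid, side

# An instrument panned dead center is identical in both channels,
# so it cancels completely out of the side signal:
left = [1.0, 0.5, -0.5]
right = [1.0, 0.5, -0.5]
mid, side = mid_side(left, right)
print(side)  # [0.0, 0.0, 0.0] -- nothing panned off-center survives
```

Anything that differs between the channels, i.e. anything panned off-center, is exactly what remains in the side signal, which is why soloing it reveals the panned elements so clearly.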
Below, hear the difference between a full track, the track with only panned info, and the track in mono.
After doing a few of these A minus B tests on your favorite tracks to get a better sense of how stereo width was used, you might have noticed that important mix elements are often panned to the center. In our example above, the vocals are somewhat spread out to the left and right sides, but the kick, snare, and bass are all dead center.
Listening to just the side channels of recordings can help you hear producers’ characteristic practices and tricks, since many elements of the original tracks are revealed more clearly there. Listen to a few recordings this way and you’ll quickly start to identify different producers’ styles.
Depending on the genre, you’ll hear specific instruments appear in similar places in the stereo image. Tonal instruments that carry harmony, like guitars, keyboards, or pads, are often panned off-center to complement the featured instrument. Percussion instruments like hi-hats, shakers, and other rhythmic ear candy are often found on the edges of the stereo field.
Here is an example of what you might expect to see in terms of instrument placement within the stereo field:
By considering common instrument placement across the stereo image, we can be more deliberate about the moves we make when we adjust it.
Many instrument sounds contain a great deal of information in the midrange, so this frequency region should be treated with caution when working with your side channels. For example, if you bring up the side information in the mid-range frequency bands between 500–2000 Hz in an attempt to address the guitars and pads, you run the risk of throwing off the balance between those instruments and other midrange sources panned in the center, such as the lead vocal. This isn’t to say you should actively avoid touching midrange frequencies in the side channels; however, always be aware of the high potential for frequency masking in this range.
Because high frequency information, like percussion and hi-hats, is generally in less competition with the vocal, drums, and bass, it can handle larger changes than low frequency information without impacting the clarity of the center elements. If we boost the high end of the side signals, they’re brought forward.
Below is a handy musical frequency chart that shows where common instruments often fall in the frequency spectrum. Use this to help determine how you should structure your crossovers.
When you’re making a decision about width in the mix, ask yourself: why am I doing it? What do I hope to achieve from this decision? Remember, if you hear instruments panned wide and you start to increase the level of the side signal, those instruments will come up in the balance.
Sure, this could be interesting, but there’s a flip side to this decision. If all of the important rhythm section instruments are panned right up the middle and we increase the sense of width around the stereo image, do you think you’ll lose the focus on the center, on the groove?
With every adjustment you make, pay attention to what’s happening to the focused center of your stereo image. Understanding what is in the middle and what’s in the sides helps us to know exactly what we’re doing when we adjust the sense of stereo width or the balance between mid/side signals.
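The basic math behind a width control makes this trade-off concrete: the mid signal is left untouched while the side signal is scaled, so anything dead center is unaffected and only the panned material moves. A minimal sketch, assuming plain lists of samples (`adjust_width` is a hypothetical helper, not an Ozone API):

```python
# Sketch of a simple stereo width control via mid/side scaling.

def adjust_width(left, right, width):
    """Scale the side signal by `width`: 1.0 = unchanged, 0.0 = full mono,
    values above 1.0 push panned material further out."""
    out_l, out_r = [], []
    for l, r in zip(left, right):
        mid = (l + r) / 2
        side = (l - r) / 2 * width
        out_l.append(mid + side)  # re-encode back to left/right
        out_r.append(mid - side)
    return out_l, out_r

# A hard-panned element collapses to the center at width = 0:
print(adjust_width([1.0], [0.0], 0.0))  # ([0.5], [0.5])
```

Because the mid channel passes through unchanged, centered elements like the kick, snare, and lead vocal keep their level; widening only raises the panned material relative to them, which is exactly the balance shift described above.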
One of the most important tools when considering adjustments to a stereo image is Imager’s vectorscope. Let’s explore how this works:
If you have a purely mono signal, you’ll see the meter on the right of the vectorscope go all the way up to the top, as it does in the image below when we switch between a stereo and mono image of the same signal. The more vertically oriented your stereo image appears in Imager, the stronger the mono component.
In the meter on the right, zero represents an equal distribution of energy across the left, right, and center channels. Let’s solo the side channels and see what happens to that meter:
Uh oh, it looks like the meter got pulled all the way down! Readings under zero indicate an out of phase signal, which could cause problems for listeners. What sort of problem does this create? Let’s solo the side channel again and collapse the mix to mono, to mimic someone who is listening on their phone from a low bandwidth feed.
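What the meter is reporting can be sketched as a correlation between the two channels, and the mono fold-down is just their sum. In this toy example, assuming short lists of samples (`correlation` and `mono_sum` are illustrative helpers, not Ozone functions), a side-only signal pins the reading at the bottom and cancels completely in mono:

```python
import math

def correlation(left, right):
    """Normalized correlation between channels: +1 = mono, -1 = out of phase."""
    num = sum(l * r for l, r in zip(left, right))
    den = math.sqrt(sum(l * l for l in left) * sum(r * r for r in right))
    return num / den if den else 0.0

def mono_sum(left, right):
    """Collapse a stereo signal to mono, as a phone speaker effectively does."""
    return [(l + r) / 2 for l, r in zip(left, right)]

left = [0.8, -0.3, 0.5]
right = [-0.8, 0.3, -0.5]           # side-only: right is phase-inverted left
print(correlation(left, right))     # -1.0 or within rounding of it
print(mono_sum(left, right))        # [0.0, 0.0, 0.0] -- the music disappears
```

Equal and opposite samples sum to zero, which is exactly why the soloed side channel vanishes on a mono playback system.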
The music disappears! So whenever you’re looking at your mix, if you see a stronger horizontal component and a weaker vertical component, you know to take a step back. Ask yourself, is there too much information panned here? Does the rhythm section need a boost?
While this is not always the case, the relationship between vertical and horizontal orientation within Imager can be a helpful visual clue to discover a potentially problematic relationship.
Many people talk a great deal about centering their bass, yet often forget that when you’re mastering, you’re dealing with the sum of all tracks to stereo. That means your bass frequency range may contain information from other tracks besides your specific bass track. Let’s explore what this means.
Solo the energy below about 110 Hz only—the kick drum, the bass. From here, if you move the stereo width slider of your low end band to the bottom, you’ve created an entirely mono low end.
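Conceptually, that move splits the signal at a crossover and collapses only the low band to mono. A rough sketch of the idea, assuming plain sample lists and a simple one-pole low-pass as the crossover (a real multiband imager uses much steeper, phase-matched filters; `mono_low_end` is an illustrative helper, not an Ozone function):

```python
import math

def one_pole_lowpass(samples, cutoff_hz, sample_rate=44100):
    """Very simple one-pole low-pass filter used here as a toy crossover."""
    a = math.exp(-2 * math.pi * cutoff_hz / sample_rate)
    out, y = [], 0.0
    for x in samples:
        y = (1 - a) * x + a * y
        out.append(y)
    return out

def mono_low_end(left, right, cutoff_hz=110.0, sample_rate=44100):
    """Sum everything below roughly cutoff_hz to mono; leave the rest alone."""
    low_l = one_pole_lowpass(left, cutoff_hz, sample_rate)
    low_r = one_pole_lowpass(right, cutoff_hz, sample_rate)
    out_l, out_r = [], []
    for l, r, ll, lr in zip(left, right, low_l, low_r):
        low_mono = (ll + lr) / 2           # collapse the low band to mono
        out_l.append((l - ll) + low_mono)  # keep the highs, replace the lows
        out_r.append((r - lr) + low_mono)
    return out_l, out_r
```

Note that if the low band is already identical in both channels, as kick and bass usually are, this processing changes nothing, which is the point the next paragraphs make.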
Check out the vectorscope above, and see the vertical orientation of the signal. If you return the slider to zero, the orientation becomes more diffuse and includes information on the phase difference between the two channels, as you can see below:
There is certainly a difference when you take the low end and sum it to mono, or restore it to its original orientation. Yet you’ll also notice there was no kick drum and no bass. If you look at your favorite mixes in most genres that include drums and bass, you’ll discover that the mono signal contains all the kick and bass energy. Because of this, mono-ing the bass doesn’t help to tighten it or bring the elements into focus, because they’re already incredibly focused: they’re panned right up the center.
All you do by centering the bass is bring the low frequency energy of other tracks, like guitars, keys, pads, even male vocals or background vocals with low pitches, up to center. This may or may not be a good thing. In doing so, you’ll lose a sense of depth and space in the low/mid range, and gain a bit of clarity, because you’ve removed some low energy from the side (difference) channel and brought it to the center.
If there’s something you need to do to adjust the stereo image, if it feels too narrow, it’s important to consider where in your mastering signal chain you should make this correction.
A good rule of thumb is to make the correction first before you do any other signal processing, because your stereo image heavily impacts your chain’s processing downstream. You want the stereo field to be appropriately sized prior to using other modules. If the stereo image sounds good, but you’d like to create a little more width or add more energy out on the sides, it’s best to leave that until your very last or next-to-last module prior to your limiter.
As we discussed above, different frequency ranges and the instruments that live within them require different approaches. Because of this, the best course of action is often to make different decisions in each band.
You might decide to add a little stereo width in the mid-range, but not so much so as to create the competition between elements we spoke of above.
You might add a little more width to the high frequency information, or you could even create a fourth band to handle the ultra-high frequencies, for example when the high end of your hi-hat is missing most of its body.
You’ll notice in Imager that where you’re creating a sense of width, you also get added excitement in the high end. Everything feels a bit wider and broader, but the warmth of the lead vocal has been pushed back as a result. And that’s a trade-off you’ll need to consider and manage when you’re making decisions about how wide to make a track.
Today we’ve presented some guidelines and ways to consider managing mono vs. stereo that relate back to common decisions we all make in music production. So don’t be afraid to experiment! Ultimately, we’re engaging in a creative activity, and it’s important to be able to try new sonic avenues to see what works and what doesn’t. Pan the kick to one side and the bass to another, widen the low end and see what happens.
But always remember to keep one thing in mind: your listener. We’ve created a playlist of tracks that have taken novel approaches to stereo imaging. Some are conventional, some are downright wacky, and all are there to help stimulate some creative ideas for you. Have at it: