This article is part of Are You Listening?, a mastering series with Jonathan Wyner.
People talk a lot about mastering for streaming platforms, but you have to wonder—why? What’s so different about mastering for a streaming service versus any other format? Today we’re going to cover how to approach mastering for streaming services, focusing on the two main things that set it apart from mastering for other formats: level and loudness.
Lossy codecs (MP3, AAC, Ogg Vorbis, etc.)
Most streaming services employ lossy codecs: file formats that discard part of the musical information in a track to make files smaller, which makes them well suited to streaming and sharing across all sorts of bandwidths. The encoding does, however, often strip high-frequency content, which shows up as less high-end information on a spectrogram or spectrum analyzer.
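To put those smaller files in perspective, here's a quick back-of-the-envelope size comparison. The numbers assume CD-quality source audio (16-bit, 44.1 kHz, stereo) and a four-minute track; this is a sketch of the arithmetic, not a statement about any particular service.

```python
# Rough size comparison: CD-quality lossless audio vs. a 192 kbps MP3.
BIT_DEPTH = 16        # bits per sample
SAMPLE_RATE = 44_100  # samples per second
CHANNELS = 2          # stereo

wav_kbps = BIT_DEPTH * SAMPLE_RATE * CHANNELS / 1000  # 1411.2 kbps
mp3_kbps = 192

def size_mb(kbps, seconds):
    """Approximate file size in megabytes at a given bitrate."""
    return kbps * 1000 * seconds / 8 / 1_000_000

song = 4 * 60  # a four-minute track, in seconds
print(f"Lossless: {size_mb(wav_kbps, song):.1f} MB")  # ~42.3 MB
print(f"MP3:      {size_mb(mp3_kbps, song):.1f} MB")  # ~5.8 MB
```

Roughly a seven-fold reduction, which is exactly why these codecs dominate streaming.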
Because we know our masters will be turned into lossy files, level becomes especially important.
Level (peak normalized, loudness normalized)
The other issue has to do with level and whether or not the listener is going to play back the audio peak normalized or loudness normalized.
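The difference between the two paradigms is easy to see with a little arithmetic. The sketch below is simplified: real services measure integrated loudness with the ITU-R BS.1770 algorithm (LUFS), which is more involved than this, and the -14 LUFS target is just a common example, not a universal rule.

```python
def peak_normalize(samples, target_peak=1.0):
    """Scale audio so its highest peak hits the target: what matters
    is the single loudest sample, not the average level."""
    peak = max(abs(s) for s in samples)
    return [s * target_peak / peak for s in samples]

def loudness_match_gain_db(track_lufs, target_lufs=-14.0):
    """Gain a loudness-normalizing service applies so every track
    plays back at the same perceived level (illustrative -14 LUFS)."""
    return target_lufs - track_lufs

# Peak normalization only cares about the single loudest sample:
print(max(abs(s) for s in peak_normalize([0.5, -0.25, 0.1])))  # 1.0

# Loudness normalization turns quiet tracks up and loud tracks down:
print(loudness_match_gain_db(-20.0))  # 6.0  (quiet track: +6 dB)
print(loudness_match_gain_db(-8.0))   # -6.0 (slammed track: -6 dB)
```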
Since the earliest recorded audio, people have owned music, whether as a physical object or as a file stored locally on a hard drive. That audio exists at whatever resolution and level the owner bought it—very different from a streaming service. Nobody is going to come into your house and change the level or the resolution of your turntable. You've got the audio, you own it, it lives in your house.
Streaming services, on the other hand, are more like broadcast radio: audio is uploaded to a service and then distributed from there. You might be playing back at home through a laptop or desktop machine, through a media player integrated into a home system, through an app, through something feeding your television or a soundbar, or you might be listening on your phone or tablet. Each of those streams might be slightly different, and each service might handle your audio slightly differently.
If you're out in the wild listening on your phone, the streaming service might reduce the bandwidth and you may hear a slightly modified version of the audio. In fact, if you have a really poor cell connection, in some cases the service might even throttle the stream down to mono playback!
Can’t we just master for every potential listening situation?
It's not as if we master something for every streaming service and every eventuality—that would be crazy! We can't anticipate what's going to happen in every instance. Instead, we have to think about our mastering work so that the audio doesn't completely fall apart in any playback situation. This is one of the reasons we pay attention to mono compatibility, for instance. We can't assume nobody is going to listen to our mixes in mono, so we account for it in the mastering process and test the mix's playback in mono.
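A mono-compatibility check is easy to reason about numerically. This sketch folds a stereo signal down to mono the way a single-speaker device might, using a deliberately polarity-inverted right channel as the worst case:

```python
import math

def mono_fold(left, right):
    """Sum stereo to mono the way a phone speaker or club PA might."""
    return [(l + r) / 2 for l, r in zip(left, right)]

# A 1 kHz sine at 48 kHz, with the right channel polarity-inverted:
# the classic worst case for mono playback.
n = 480
left = [math.sin(2 * math.pi * 1000 * i / 48000) for i in range(n)]
right = [-s for s in left]  # out of phase with the left channel

mono = mono_fold(left, right)
print(max(abs(s) for s in mono))  # 0.0 -- the part vanishes in mono

# The same material in phase survives the fold at full level:
mono_ok = mono_fold(left, left)
print(max(abs(s) for s in mono_ok))  # 1.0
```

Real mixes rarely cancel completely like this, but wide stereo effects with heavy phase offset lose level in exactly this way when a stream is folded to mono.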
How do you set levels so that audio sounds as good as possible on a streaming service?
Above, we play a track and set the destination to Streaming. We’ll disregard some other options here to focus on this one decision point. Master Assistant will then consider tonal balance, musical and technical dynamics, and the overall level setting of the track.
Let’s take a look at the Maximizer, our limiter. Typically the limiter is the last thing Master Assistant will place in your chain. You'll notice in the output meter that it sets your peak level to -1 dBFS. The reason for that choice has to do with what happens to the audio when it gets encoded into a lossy file format.
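The arithmetic behind that -1 dBFS ceiling looks like this. Note that this sketch hard-clips for simplicity, which is not what the Maximizer does (a real limiter uses look-ahead and smoothed gain reduction); only the ceiling math is the same.

```python
def dbfs_to_linear(dbfs):
    """Convert dBFS to linear amplitude (0 dBFS -> 1.0)."""
    return 10 ** (dbfs / 20)

def hard_ceiling(samples, ceiling_dbfs=-1.0):
    """Clamp every sample to the ceiling. (A real limiter uses
    look-ahead and smoothed gain reduction rather than clipping;
    only the ceiling arithmetic is shown here.)"""
    c = dbfs_to_linear(ceiling_dbfs)
    return [max(-c, min(c, s)) for s in samples]

limited = hard_ceiling([0.99, -0.5, 0.2])
print(round(max(abs(s) for s in limited), 3))  # 0.891, i.e. -1 dBFS
```

That linear value of about 0.891 leaves roughly 1 dB of headroom for the peak overshoot the encoder introduces.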
Let’s listen to what a lossy MP3 file at 192 kbps sounds like using Codec Preview in Ozone, and then to just the soloed artifacts.
When you use Codec Preview, you may notice the peak level on your output meter is somewhat higher than it was before. This is a natural byproduct of turning a WAV or AIF file into a lossy file: the peak level at the output ends up different from the lossless version. We can't avoid it, but we can prepare for it by lowering the level of our master so that when it gets turned into a lossy file, it won't overdrive the output.
If you're working on a track set to a very high RMS level, with a low crest factor between peak and average, you may notice that you still get some level overage, even with the output set to -1 dBFS. Our goal here is not to completely eliminate distortion in the lossy file, but at least to reduce it. This is one of the implications of mastering for streaming services.
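Crest factor itself is just the gap between peak and average level. Here's a quick sketch, using plain RMS as the average:

```python
import math

def crest_factor_db(samples):
    """Crest factor: the gap between peak and RMS level, in dB."""
    peak = max(abs(s) for s in samples)
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(peak / rms)

# A pure sine wave has a crest factor of about 3 dB ...
sine = [math.sin(2 * math.pi * i / 100) for i in range(100)]
print(round(crest_factor_db(sine), 1))    # 3.0

# ... while a square wave (think heavily limited audio) has none:
square = [1.0 if s >= 0 else -1.0 for s in sine]
print(round(crest_factor_db(square), 1))  # 0.0
```

The closer a master gets to that square-wave extreme, the less room the encoder has to work with, and the more likely the lossy file is to overshoot the ceiling.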
Loudness in mastering for streaming services
Here’s where things get somewhat controversial: let’s talk loudness. In a perfect world, you would know exactly how your master is going to reach the listener (peak normalized, or loudness normalized to the target of whichever streaming service they're using) and could master for that playback paradigm.
Some streaming services offer the ability to turn loudness normalization on or off. The implications of this are maddening, because it means you can't know for sure whether the service will match levels as a listener goes from your song to the next one they play.
At this point you might be asking some of the following questions:
- Do I have to push the level up really, really high so that the track sounds as loud as the next thing that's going to play?
- If I push the level up really high and the streaming service then turns it down, is it going to sound worse?
- Have I pushed the level so high that it ends up damaging the audio, and then, when it gets turned down, that damage becomes even more apparent than it might have been otherwise?
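To make the "pushed loud, then turned down" scenario concrete, here's the gain math a loudness-normalizing service applies at playback. The -14 LUFS target and the loudness figures are illustrative only; actual targets vary by service and by listener settings.

```python
def service_playback_gain_db(track_lufs, target_lufs=-14.0):
    """Gain a loudness-normalizing service applies at playback.
    The -14 LUFS target is illustrative; actual targets vary by
    service and by listener settings."""
    return target_lufs - track_lufs

# Two hypothetical masters of the same song:
slammed = -7.0   # pushed very hard into the limiter
dynamic = -12.0  # a more conservative level

# With normalization on, both land at the same perceived loudness,
# so the slammed master's extra level is simply undone:
print(service_playback_gain_db(slammed))  # -7.0
print(service_playback_gain_db(dynamic))  # -2.0

# Any limiting distortion baked into the slammed master, however,
# survives the turn-down.
```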
There's a problem here, because many people will use the idea of loudness normalization as an argument for not pushing level at all. I think that's a mistake. If we treat any standard playback level as an arbitrary way of defining the artistry of our work, we run the risk of making mistakes, or at least of not making tracks sound as good as they can and serve the artist as well as they possibly could.
For my money, what I ultimately prefer is to make a track sound as good as possible at as high a level as possible. That's an abstract idea, because there's no single number that I use or should be using; it varies with the genre, the artist, and what the artist needs. My goal as a mastering engineer is to make something sound as good as it can so that it works well in as many different playback paradigms as possible.
In the next episode, we’ll dive more deeply into loudness in mastering. If you have any questions or comments, please leave them in the survey below. We always want to hear your thoughts.