How to Master for Streaming Platforms: Normalization, LUFS, and Loudness
I think it’s safe to say we’re officially in the age of streaming—in fact, you could probably argue that we have been since about 2015. While it’s true that physical formats like vinyl—which has seen nearly exponential growth since 2007—and CD—which made a modest comeback in 2021—have seen increased sales in recent years, they still pale in comparison to streaming, which in 2021 enjoyed a commanding 83% market share!
With that in mind, understanding how to master a song for streaming is as important now as it’s ever been because each platform has loudness standards and specifications to adhere to if you want your music heard as intended. So without further ado, let's dive into streaming platform specifications for loudness, level, and normalization.
Loudness, LUFS, and normalization
One of the core questions we’ll need to address is, “How loud should I master?” To answer it, though, we’ll need a good understanding of loudness, LUFS measurements, and the concept of normalization. Let’s start by bringing some definition to those terms.
Loudness seems like it ought to be a simple enough concept, but if we pry a little we can uncover some of its complexities. Is loudness intrinsic to a file? Or is it dependent on the sound pressure level—SPL—in the air? Where do user volume controls factor in, and what about tonal balance and the personal hearing traits of the listener? You can read more about these complexities of loudness in this article, but for our discussion here, we’ll think about loudness as it relates to so-called “loudness meters.”
Loudness meters, like those found in the Loudness panel of Insight, are a modern way of measuring perceived loudness in a digital environment, and the unit they measure is known as LUFS—loudness units relative to full scale.
Unlike the idea of “loudness,” LUFS meters and their operation are very well defined. However, that definition is still quite complex. We don’t need to get into all the nuts and bolts here, but let’s take a high-level look at how LUFS meters work, and what the different measurements they show us indicate.
The first thing to understand about LUFS measurements is that they attempt to account for the fact that humans don’t perceive all frequencies as equally loud for a given SPL. We’ve talked about this in more detail in this article about monitor gain, and to emulate this, LUFS meters gently roll off incoming signals below about 100 Hz, while accentuating them above about 2 kHz. This is known as “K-weighting” and effectively means that they’re less sensitive to low frequencies, and more sensitive to higher frequencies. Read more about the technical details of LUFS.
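To make the K-weighting idea concrete, here is a plain-Python sketch of the two-stage filter. The coefficients are the published ITU-R BS.1770 values for 48 kHz audio; the `biquad`, `k_weight`, and `rms_db` helpers are just illustrative names for this example.

```python
import math

# ITU-R BS.1770 K-weighting for 48 kHz audio, as two biquad stages:
# stage 1 is the ~+4 dB high-shelf, stage 2 the low-frequency roll-off.
STAGE1 = (1.53512485958697, -2.69169618940638, 1.19839281085285,
          -1.69065929318241, 0.73248077421585)   # b0, b1, b2, a1, a2
STAGE2 = (1.0, -2.0, 1.0, -1.99004745483398, 0.99007225036621)

def biquad(samples, coeffs):
    """Apply one second-order IIR filter section (direct form I)."""
    b0, b1, b2, a1, a2 = coeffs
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in samples:
        y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        x2, x1, y2, y1 = x1, x, y1, y
        out.append(y)
    return out

def k_weight(samples):
    return biquad(biquad(samples, STAGE1), STAGE2)

def rms_db(samples):
    return 10 * math.log10(sum(s * s for s in samples) / len(samples))

sr = 48000
low  = [math.sin(2 * math.pi * 50 * n / sr) for n in range(sr)]    # 50 Hz tone
high = [math.sin(2 * math.pi * 8000 * n / sr) for n in range(sr)]  # 8 kHz tone

print(rms_db(k_weight(low)) - rms_db(low))    # negative: lows are rolled off
print(rms_db(k_weight(high)) - rms_db(high))  # positive: highs are boosted
```

Running this shows exactly the behavior described above: a 50 Hz tone measures quieter after K-weighting, while an 8 kHz tone measures louder.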
The next thing to understand is that most meters use “EBU Mode” to show five different loudness metrics, as shown above in Insight. Let’s walk through them to quickly explain what each one shows us.
Momentary and Short Term measurements are both essentially RMS measurements. They are K-weighted, as described above, and Momentary uses a 400 ms time-scale, while Short Term uses a 3 second time-scale.
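The only difference between the two is the length of the analysis window, which a rough sketch makes clear. This simplified example skips the K-weighting step a real meter would apply first; `windowed_loudness` is a hypothetical helper, and the −0.691 offset comes from BS.1770.

```python
import math

def windowed_loudness(samples, sample_rate, window_s):
    """Mean-square level of the most recent window, in dB.
    A real meter would K-weight the signal first; this sketch skips that."""
    n = int(window_s * sample_rate)
    block = samples[-n:]
    mean_sq = sum(s * s for s in block) / len(block)
    return -0.691 + 10 * math.log10(mean_sq)  # -0.691 offset per BS.1770

sr = 48000
tone = [0.5 * math.sin(2 * math.pi * 997 * n / sr) for n in range(3 * sr)]

momentary  = windowed_loudness(tone, sr, 0.4)  # 400 ms window
short_term = windowed_loudness(tone, sr, 3.0)  # 3 s window
# For a steady tone the two agree; on real music, Momentary jumps
# around while Short Term smooths those swings out.
```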
The Integrated metric is essentially a K-weighted measurement of a whole song, built up from the Momentary measurements. Additionally, there is a measurement “gate.” This means that very quiet signals—below -70 LUFS—do not contribute to the loudness measurement, and once that threshold is crossed, signals more than 10 dB below the overall measurement also don’t count.
In other words, if the integrated measurement is -12 LUFS, portions of the signal below -22 LUFS will no longer contribute to the loudness measurement. When we talk about normalization, it will be this integrated measurement that we’re interested in.
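The two-stage gate is easy to sketch in code. This simplified example works from a list of pre-computed momentary block loudnesses (a real meter builds these from 400 ms blocks with 75% overlap); the function names are illustrative.

```python
import math

def energy_mean(loudnesses):
    """Average block loudnesses in the energy domain, then back to LUFS."""
    energy = sum(10 ** (l / 10) for l in loudnesses) / len(loudnesses)
    return 10 * math.log10(energy)

def integrated_loudness(blocks):
    """Two-stage gated average of momentary block loudnesses (LUFS)."""
    audible = [l for l in blocks if l > -70.0]          # absolute gate
    threshold = energy_mean(audible) - 10.0             # relative gate
    return energy_mean([l for l in audible if l > threshold])

# 90 loud blocks, 10 quiet ones, 5 near-silent ones:
blocks = [-12.0] * 90 + [-40.0] * 10 + [-80.0] * 5
print(round(integrated_loudness(blocks), 1))  # -12.0: the quiet tails are gated out
```

Notice that the quiet passages don’t drag the measurement down: the -80 LUFS blocks fall below the absolute gate, and the -40 LUFS blocks fall below the relative gate.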
Loudness Range—or LRA—is quite complex, but essentially you can think of it as a measure of musical dynamics. A classical recording, with wide variations between soft, pianissimo sections, and loud fortissimo sections, could have a very high LRA—perhaps 20 LU or more. Meanwhile, a metal song that’s full-on all the way through, might have an LRA measurement of just 3–4 LU.
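A simplified version of the LRA calculation (after EBU Tech 3342) takes the distribution of short-term loudness values, gates it, and measures the spread between its 10th and 95th percentiles. This sketch uses a crude percentile lookup and made-up loudness distributions purely for illustration.

```python
import math

def loudness_range(short_term_values):
    """Simplified LRA (after EBU Tech 3342): gate the short-term loudness
    distribution, then take its 10th-to-95th percentile spread in LU."""
    gated = [l for l in short_term_values if l > -70.0]          # absolute gate
    mean = 10 * math.log10(sum(10 ** (l / 10) for l in gated) / len(gated))
    dist = sorted(l for l in gated if l > mean - 20.0)           # relative gate
    p10 = dist[int(0.10 * (len(dist) - 1))]
    p95 = dist[int(0.95 * (len(dist) - 1))]
    return p95 - p10

# A full-on "metal" song vs. a dynamically varied "classical" piece:
metal = [-9.0] * 50 + [-10.0] * 50
classical = [-35.0 + 25.0 * i / 99 for i in range(100)]  # -35 up to -10 LUFS
print(loudness_range(metal))      # small: ~1 LU
print(loudness_range(classical))  # much larger: ~20 LU
```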
Finally, True Peak measurements are meant to be an improvement on traditional sample peak measurements. They use oversampling to attempt to show the actual peak level that will come out of a digital-to-analog converter—or DAC—which can help avoid clipping.
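The sketch below shows why this matters: a sine wave can swing well above its highest sample. It estimates true peak with a windowed-sinc interpolation between samples as a plain-Python stand-in for the oversampling inside a real true-peak meter, so the function and its parameters are illustrative, not a spec-compliant implementation.

```python
import math

def true_peak(samples, oversample=4, half=24):
    """Estimate true peak by windowed-sinc interpolation between samples."""
    peak = max(abs(s) for s in samples)
    for i in range(len(samples) - 1):
        for p in range(1, oversample):
            t = i + p / oversample                    # inter-sample position
            acc = 0.0
            for k in range(max(0, i - half + 1), min(len(samples), i + half + 1)):
                x = t - k                             # tap distance (never 0)
                sinc = math.sin(math.pi * x) / (math.pi * x)
                hann = 0.5 * (1.0 + math.cos(math.pi * x / half))
                acc += samples[k] * sinc * hann
            peak = max(peak, abs(acc))
    return peak

# A sine at 1/4 the sample rate, phased so every sample lands at +/-0.707,
# even though the waveform itself swings to +/-1.0 between samples:
tone = [math.sin(math.pi * n / 2 + math.pi / 4) for n in range(64)]
print(max(abs(s) for s in tone))  # sample peak: ~0.707
print(true_peak(tone))            # true peak: ~1.0
```

A sample-peak meter would report this signal roughly 3 dB lower than the level the DAC actually has to reproduce.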
Normalization is the process of setting some particular metric of a track—typically peak level or integrated loudness—to a specific, predetermined level. In the old days this usually meant setting the highest peak level in a file to 0 dBFS. In practice, the predetermined level doesn’t have to be 0 dBFS, it can be any value we want, and in fact that’s exactly how the Normalize module in RX works. In either case, this is what’s known as “peak normalization.”
If your goal is to use as much headroom as possible without clipping, then peak normalization is fine. However, if your goal is to make two songs sound roughly the same in terms of loudness, “loudness normalization” is the key. There are multiple ways to accomplish this, but most frequently the integrated loudness of a song is measured, and a gain offset is applied to make the measured value match the predetermined one. This is the way the Loudness Control module in RX works, and it’s also the technique most streaming services use. More on this in a bit.
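Both ideas reduce to a simple gain calculation, sketched below with illustrative function names. Peak normalization scales to a target peak; loudness normalization computes one offset from the integrated measurement, which is essentially what a streaming service does at playback.

```python
def peak_normalize(samples, target_dbfs=0.0):
    """Scale so the highest sample peak sits at target_dbfs."""
    gain = (10 ** (target_dbfs / 20)) / max(abs(s) for s in samples)
    return [s * gain for s in samples]

def loudness_gain_db(measured_lufs, reference_lufs=-14.0):
    """Gain offset a loudness-normalizing player would apply."""
    return reference_lufs - measured_lufs

# A master measured at -8 LUFS integrated, streamed on a -14 LUFS platform:
offset = loudness_gain_db(-8.0)      # -6.0: turned down 6 dB
scale = 10 ** (offset / 20)          # ~0.5: each sample roughly halved
```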
Creating a streaming master
With an understanding of loudness, LUFS, and normalization, let’s dig into the details of creating a master for streaming platforms. If you’re looking for some tips to help you get started on your mastering journey, please check out the Audio Mastering Tips & Tutorials section of the website. Here, we’re going to look at the specific issues that affect streaming.
One of the most frequent questions I hear on this topic is, “Should I master to -14 LUFS?” The answer may surprise you: no! Or at least, not necessarily. It’s not hard to see why people would think this though. If streaming services like Spotify and Amazon normalize music to -14 LUFS—we’ll get into the details of specific levels shortly—then why not just use that as a target when mastering? There are a few problems with this.
The goal of normalization
The goal of loudness normalization was never to force, or even encourage, mastering engineers to work toward a specific level. Loudness normalization is purely for the benefit of the end-user. It exists so that when an end-user is listening to program material from a variety of sources—like a playlist—they don’t have to constantly reach to adjust their volume control. That’s it.
Once you think of it like this, you may realize that it actually gives you a lot of freedom. If you want to master your music close to -14 LUFS, making use of the ample headroom for dynamic impact, you’re free to do so, knowing it may just get turned down a few dB. If, on the other hand, you want the denser, more compact sound that often comes from a loud master, you’re free to do that too, it will just get turned down more—and that’s not necessarily a bad thing.
What’s more, there’s another reason you shouldn’t worry about making your master match any given platform’s normalization level.
Reference levels can always change
While there’s been a convergence toward -14 LUFS in the last few years, there are still platforms which use different reference levels. For example, Apple Music uses -16 LUFS—most of the time—Deezer uses -15, and Pandora doesn’t actually use LUFS.
In fact, Spotify’s reference level is user selectable between -23, -14, and -11 LUFS! To muddy the waters further, there’s nothing to prevent any of the streaming services from changing either their reference level, normalization method, or both down the road.
So what’s an engineer to do about mastering for streaming platforms?
With all this in mind, here’s my best advice: make a track sound as good as possible at as high a level as it can handle before losing impact. That’s deliberately abstract, because there’s no single number I can realistically give—it varies by genre, song, and the artist’s intention. My goal as a mastering engineer is to make something sound as good as it can so that it works well in as many different playback paradigms as possible.
Finally, there are two other factors to bear in mind when creating a streaming master: peak level and album balances.
The second half of 2021 saw a bit of a sea change, with nearly all the major streaming platforms offering lossless streaming at no extra cost. The notable exceptions are Spotify and SoundCloud—Spotify has announced Spotify HiFi, although it’s not clear when it will be available. This leaves us at a bit of a crossroads.
When lossless streaming was the exception rather than the norm, it was important to leave a bit of peak headroom to avoid distortion during the encode and decode process of lossy streaming. A good rule of thumb was to leave at least 1 dB True Peak headroom. Sometimes though, more could sound better, especially with louder material or lower bitrates.
A good way to audition this is by using the Codec Preview module in Ozone. Not every streaming service uses MP3 or AAC, but those two codecs can certainly give you a good idea of where others might overshoot.
When you use Codec Preview, you may notice the peak level on your output meter is somewhat higher than it was before. This is a natural byproduct of taking a WAV or AIF file and turning it into a lossy file. The peak level ends up being different at the output than it is in the lossless version. You can't avoid it, but you can prepare for it by lowering the level of your master so that when it gets turned into a lossy file, it won't overdrive the output.
With lossless streaming, however, this isn’t a concern. As long as your True Peak levels stay below -0.3 dBTP or so, you’ll be fine. Which path you choose is very much up to you. You can make use of the extra level with the knowledge that listeners playing lossy streams may suffer a little extra distortion, or play it safe and cater to the lowest common denominator.
Personally, I like to make the best of both and use a non-True Peak limiter with a ceiling at -1 dBFS, followed by a True Peak limiter—like the one in Ozone—set to -0.3 dBTP.
Another question that sometimes comes up is, “Should I master all the songs on my album to the same level?” Again, it’s not hard to understand why people might think this. If streaming platforms are turning your songs down to their reference level, and different songs on your album are at different levels, doesn’t that mean they'll get turned down different amounts, thereby changing your album balance?
Thankfully, the answer here is also: mostly, no. Amazon, Deezer, Pandora, and YouTube use track normalization exclusively, meaning all tracks are adjusted to the reference level. For platforms like these, where users predominantly listen to singles or radio-type streams, this makes some sense. However, these platforms also have a relatively smaller market share.
Apple Music and Spotify, on the other hand, both have an album normalization mode. The technique employed for album normalization is to use either the level of the loudest song on an album (or EP), or the average level of the entire album, and set that equal to the platform reference level. Then the same gain offset is applied to all other songs on the album. For Spotify and Apple Music this kicks in when two or more songs from an album are played consecutively.
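The difference between the two modes is easy to see in a sketch. This example uses the loudest-song variant of album normalization described above (some platforms may use the album average instead); the function names and track levels are made up for illustration.

```python
def track_normalize_offsets(track_lufs, ref=-14.0):
    """Track mode: each song gets its own offset -- relative balance is lost."""
    return [ref - l for l in track_lufs]

def album_normalize_offsets(track_lufs, ref=-14.0):
    """Album mode: the loudest song sets one offset, applied to every song,
    so the album's internal balance survives."""
    offset = ref - max(track_lufs)
    return [offset] * len(track_lufs)

album = [-9.0, -12.0, -16.0]              # loud single, mid track, quiet ballad
print(track_normalize_offsets(album))     # [-5.0, -2.0, 2.0]
print(album_normalize_offsets(album))     # [-5.0, -5.0, -5.0]
```

In track mode every song plays back at -14 LUFS and the carefully crafted 3 and 7 dB gaps between songs disappear; in album mode those gaps are preserved.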
Interestingly, Tidal has elected to use album normalization for all songs, even when they’re in a playlist. This method was implemented after Eelco Grimm published research on the matter in 2017, presenting strong evidence that album normalization is preferred for both album and playlist listening by a majority of users. If we analyze this, it points to another important fact: we shouldn’t let normalization reference levels dictate how we level songs on an album, but rather let the artistic intent and natural flow of the music be our guide.
Loudness specifications by streaming platform
After all that, you may wonder why we would care about a particular platform’s specifications. After all, I am saying that we really shouldn’t worry about these too much, and just make the music sound the best it can. However, part of our job as mastering engineers is to be informed.
Following are some specifications by platform, to help you understand the different variables we’re all dealing with. Thanks to Ian Shepherd and Ian Kerr at MeterPlugs for helping compile a lot of this information!
Apple Music uses a reference level of -16 LUFS, enables normalization on new installations, will turn quieter songs up only as much as peak levels allow, never uses limiting, and allows for both track and album normalization depending on whether a playlist or album is being played.
The caveat here is that older versions of macOS and iOS may still be using Sound Check—a non-LUFS based normalization method—and didn't always enable it by default.
Spotify uses a default reference level of -14 LUFS, but has additional user selectable levels of -23 and -11 LUFS. Normalization is enabled by default on new installations, and quieter songs will be turned up only as much as peak levels allow for the -23 and -14 LUFS settings. Limiting will be used for the -11 LUFS setting, however more than 87% of Spotify users don’t change the default setting. Spotify also allows for both track and album normalization depending on whether a playlist or album is being played.
YouTube uses a reference level of -14 LUFS, and normalization is always enabled. It will not turn quieter songs up, never uses limiting, and uses track normalization exclusively.
SoundCloud does not use normalization, and also does not offer lossless streams. Additionally, artists typically upload directly to SoundCloud rather than using an aggregator. For these reasons, you may actually want to consider a separate master for SoundCloud, although you certainly don’t need to.
Read more about how to optimize your master for SoundCloud.
Amazon Music, Tidal, and more
Amazon Music and Tidal both use -14 LUFS, while Deezer uses -15 LUFS, and Pandora is close to -14, but doesn’t actually use LUFS. Tidal and Amazon have normalization on by default, while Deezer and Pandora don’t allow it to be turned off. Amazon, Pandora and Deezer use only track normalization, while Tidal uses only album normalization. Only Pandora will turn quieter songs up, and none of them will use limiting.
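To see what these numbers mean for one master, here’s a small sketch that tabulates the reference levels above and computes the rough gain each platform would apply. It ignores the per-platform quirks just described (limiting, refusal to turn songs up, album mode), and approximates Pandora’s non-LUFS scheme as roughly -14, so treat it as a ballpark estimate only.

```python
# Reference levels from the platform notes above (subject to change;
# Pandora's non-LUFS scheme is approximated here as roughly -14).
REFERENCE_LUFS = {
    "Apple Music": -16.0,
    "Spotify (default)": -14.0,
    "YouTube": -14.0,
    "Amazon Music": -14.0,
    "Tidal": -14.0,
    "Deezer": -15.0,
    "Pandora (approx.)": -14.0,
}

def playback_offsets(master_lufs):
    """Rough per-platform gain (dB) each service would apply to one master."""
    return {name: round(ref - master_lufs, 1)
            for name, ref in REFERENCE_LUFS.items()}

offsets = playback_offsets(-9.0)   # e.g. Apple Music: -7.0, Deezer: -6.0
```

A -9 LUFS master gets turned down everywhere, by 5 to 7 dB depending on the platform, which is exactly why chasing any single target number buys you so little.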
The official AES recommendation
On top of all this, it should be noted that the Audio Engineering Society has made a set of recommendations in the form of AESTD1008. It’s a comprehensive document, but here are some of the highlights:
- Use album normalization whenever possible, even for playlists
- Normalize speech to -18 LUFS
- Normalize music to -16 LUFS. If using album normalization, normalize the loudest track from the album to -14 LUFS
Checking the specs of your master
If you want to check the specs of your master to see how it will be handled you can use the Loudness panel in Insight, as shown above. If you do this, you’ll need to play the song from start to finish without interruption. This might not be a bad idea anyway though. After all, you’re about to release it to the world, so this is your last chance to make changes!
If you’ve already done that and you’re looking for a faster method to measure your specs, you can also use the “Waveform Statistics” window in RX. You can access this under the “Window” menu, or using the Option+D shortcut.
Start mastering for Spotify, Apple Music, and more
I get it, it’s a lot to absorb. I know I’ve not given many concrete numbers or rules of thumb, but hopefully, you see that it’s because there are a lot of variables that have the potential to change at any time. Still, since you’ve made it this far, let me share with you a few personal axioms that guide my day to day work:
First and foremost, do what serves the music. It is a real shame to try to force a piece of music to conform to an arbitrary and temporary standard if it is not in the best interest of the song or album.
An integrated level of roughly -12 LUFS, with peaks no higher than -1 dBTP, and a max short-term level of no more than -10 or -9 LUFS is likely to get turned down at least a little on all the major streaming platforms—at least for now. This does not mean all songs need to be exactly this loud (see next point).
When leveling an album, don’t worry if some songs are below a platform’s reference level. Moreover, don’t push the level of the whole album higher, sacrificing the dynamics of the loudest song(s), in an effort to get the softer songs closer to the reference level.
A song with substantial differences between soft and loud passages may sound quieter than expected. If this is of concern and it is not detrimental to the music, subtle level automation or judicious upward compression can help even out these dynamic changes without unnecessary reduction of the crest factor.
Hopefully, these four parting tips, along with a better understanding of the forces at work on loudness normalized streaming platforms, will better equip you to make masters that translate well not only today but for years to come.
You can start mastering for streaming platforms with iZotope mastering plug-ins like RX and Ozone that are included in Music Production Suite.