Tonal balance refers to the distribution of energy across the audio spectrum. Whether you’re an artist or an audio engineer, understanding and managing tonal balance helps you achieve a desired effect for any source: a solo piano, a jazz trio, a dubstep track, or a 12-piece mariachi band. You can visualize it as a signal’s frequency curve over time.
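That frequency curve can be approximated with short-time Fourier analysis. Here is a minimal Python sketch (assuming NumPy is available; the frame size and hop length are arbitrary illustrative choices) that averages the magnitude spectra of overlapping windowed frames into a single curve:

```python
import numpy as np

def frequency_curve(signal, sample_rate, frame_size=4096, hop=2048):
    """Average magnitude spectrum (in dB) across overlapping frames.

    A rough sketch of the "frequency curve over time" idea: window the
    signal, take the FFT of each frame, and average the magnitudes.
    """
    window = np.hanning(frame_size)
    frames = np.array([signal[i:i + frame_size] * window
                       for i in range(0, len(signal) - frame_size, hop)])
    mags = np.abs(np.fft.rfft(frames, axis=1))   # one spectrum per frame
    avg = mags.mean(axis=0)                      # average across time
    freqs = np.fft.rfftfreq(frame_size, d=1.0 / sample_rate)
    return freqs, 20.0 * np.log10(avg + 1e-12)   # dB, guarded against log(0)
```

Plotting two such curves on top of each other is essentially how the comparison images later in this article work: where one curve sits above the other, that signal carries more energy in that frequency range.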
However, the term “balance” is one that is frequently bandied about in discussions regarding the quality of audio recordings and mixes. It’s not hard to conjure up memories of statements like “That mix is on point—super balanced,” or “You gotta use this old mic on piano. It’s smooth and balanced.”
It seems that everybody assumes that everybody else knows what they mean by “balanced,” even though no further explanation is given. What's frustrating is that the definition is not only vague but also variable: it’s subjective and contextual. Interpretations of tonal balance differ among avid listeners, artists, and recording, mixing, and mastering engineers, depending on their relationship to the music and their personal taste.
What follows are perspectives from a variety of players in the music industry, intended to help you better understand the meaning of tonal balance.
Ask musicians about balanced tone and you'll usually get references to their musical idols. Something like, “Man, I saw Famousface McTwiddleystrings live in ‘89. I’m telling you, the guitar tone was unreal!”
Unsurprisingly, two guitarists may have drastically different thoughts on what is “right.” They’re thinking about the ideal sound for their instruments in the style of music they play. For example, one guitarist’s idea of a balanced guitar tone may be one that sounds full on its own—something with enough lows, mids, and highs to seem impactful and complete by itself.
The image below shows differences between a DI’d electric guitar (white line) and the same signal with amp simulation added (blue line). What’s immediately obvious is that the signal with amp simulation has less low end energy below about 60 Hz, but significantly more energy in the 1 to 16 kHz range. In addition, there are more subtle differences in other areas, such as the low mids and ultra-high frequencies.
Frequency curves of an electric guitar DI (white line) vs electric guitar DI with amp simulation (blue line)
Naturally, if a guitarist favored the sound produced by the amp simulation, he or she would want to pursue gear and sonic changes that grant a similarly big tone. It may be a simplistic perspective, but it makes good sense when a guitar is the only thing being considered.
However, it’s not so simple when other instruments are a factor. What about people who record musicians and have to capture each of their tones? Does the definition of tonal balance change for them?
Ask recording engineers about balanced tone and expect comments about gear such as microphones, preamps, EQs, and compressors. Understandably so! For them, tone is dictated by the entire recording chain: the source (player plus instrument), the room, mics, preamps, processors, and A/D converters. They have to pick the right combination of all those elements to create the desired sound.
What exactly is this desired sound? From the perspective of the engineer, it’s a compromise between a musician’s original tone and what translates through the monitors. It’s not enough for the instrument to sound great in the room; upon playback, it should sound good not just by itself, but also with the other recorded instruments. That’s where matters get tricky.
In deciding how to achieve tonal balance, the engineer also has to consider the arrangement (when the instruments are played) and dynamics (the varying volume). If the bass guitar plays a low note each time the kick is hit, their combined levels and frequencies need to cooperate to produce the intended low-end energy. If the bass guitar plays a high note each time the kick is hit and a low note after each kick, then maybe the kick and bass should have independently strong low end. If the bass guitar is sometimes too quiet and other times too loud, its wayward dynamics can disturb the desired balance of low frequencies. On the other hand, if instruments never change in volume, the various sections of the song may all have the same loudness. That wouldn’t be exciting, now would it?
However, be aware: the artist’s vision for a song can significantly change what the engineer feels is balanced for each instrument. What sounds balanced for one artist may sound entirely wrong for another. Also, divergent genres routinely demand variant tones. For example, a balanced drum sound for a metal band will be wildly different than a balanced drum sound for a blues band. In the image below, notice the different frequency curves between a kick drum sound for a blues band (white line) and one for a metal band (blue line).
Frequency curves of a blues kick drum (white line) vs. a metal kick drum (blue line)
One of the challenges unique to the recording engineer is that he or she often doesn’t know the extent of the instrumentation for each song. Here’s a not-so-crazy scenario. A three-piece band goes to a studio and records some songs. The engineer achieves a tone that seems balanced when all three instruments are played back. Later, the band is struck by inspiration and decides that more instruments should be added. In adding more instruments, the previously attained tonal balance loses its equilibrium.
Understanding that the instrumentation may change, recording engineers are well served by being willing to modify their approach to suit the known arrangement. In practice, that means focusing on capturing each source in a way that complements the original sound, the artist’s vision, and the instruments currently in the production. After all the recording is done, a fresh challenge awaits: it is the job of the mix engineer to take a song’s numerous tracks and make them fit together. That leads us to the mixer’s mindset.
The task of fitting a song’s tracks together requires a delicate “balance.” As a result, the concept of tonal balance for a mixing engineer is primarily about what results from the sum of all tracks. It’s not that it doesn’t matter what each instrument sounds like on its own; it’s just that what matters most is how everything sounds together.
With that in mind, mix engineers often have to make drastic tonal changes on a track-by-track basis to acquire the desired overall sound. For example, if the piano, bass guitar, and kick drum in a song each produce weighty low end in a similar frequency range, they fight for attention in that range while simultaneously yielding an overabundance of low frequencies in that territory. The engineer may do substantial equalization such as boosts and cuts in slightly different low frequency zones to allow each (piano, bass, and kick) to be individually distinguished while simultaneously creating an overall low end that is more spread out than before.
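One way to sketch that kind of carving is with peaking EQ filters centered in slightly different low-frequency zones. The example below uses the well-known Robert Bristow-Johnson (RBJ) audio-EQ-cookbook peaking filter; the specific center frequencies, gains, and Q values are hypothetical illustrations, not a recipe:

```python
import numpy as np
from scipy.signal import lfilter

def peaking_eq(fc, gain_db, q, sample_rate):
    """RBJ audio-EQ-cookbook peaking filter coefficients (b, a)."""
    A = 10.0 ** (gain_db / 40.0)              # sqrt of the linear gain
    w0 = 2.0 * np.pi * fc / sample_rate
    alpha = np.sin(w0) / (2.0 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

def apply_eq(x, moves, sample_rate):
    """Run a signal through a chain of (fc, gain_db, q) peaking moves."""
    for fc, gain_db, q in moves:
        x = lfilter(*peaking_eq(fc, gain_db, q, sample_rate), x)
    return x

sr = 44100
# Hypothetical carve: give each instrument its own slice of the low end.
kick_moves  = [(55, +3.0, 1.2)]                    # kick owns ~55 Hz
bass_moves  = [(55, -3.0, 1.2), (100, +2.5, 1.2)]  # bass dips at 55 Hz, sits at ~100 Hz
piano_moves = [(90, -4.0, 1.0)]                    # piano yields its low mids
```

The complementary boosts and cuts let each instrument be distinguished in its own zone while spreading the combined low end across a wider range, which is the spirit of the move described above.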
It’s challenging to carve and sculpt each track so that it sounds best not when it is soloed, but instead when it is mixed in with everything else. For the mixer, the big question is always, “Does it sound good when all the tracks are playing together?” As was the case for recording engineers, mixing engineers must factor in the arrangement and dynamics. The chorus really needs to hit hard? Well, that’s not going to happen if the preceding verse sounds just as full and loud as the chorus. Yeah, maybe the tonal balance needs to shift between song sections in order to achieve the right overall balance.
After a song has been mixed to a satisfactory degree, it may be tempting to use it as a reference when mixing other songs. Artists and engineers are constantly comparing current mixes to finished mixes, both during and after the development of the current one. Doing so is both valuable and dangerous. It’s valuable to have something of greatness as a point of reference: hearing a great mix can help steer you in the right direction and illuminate mistakes that you’re making. Wouldn’t it be a lot easier to draw a platypus if you could reference a picture of one as you’re doing it? I rest my case. Back to music, though. Although referencing other mixes is often helpful, it can be counterproductive and even dangerous if the reference isn’t relevant to what is being mixed.
Let’s say that an artist requests that his song sound like a popular tune. If the two songs have different instruments, they will have different amounts of energy in various frequency ranges. Even if the same instruments were used, different notes (in Song 1 vs. Song 2) produce different frequencies and affect the overall balance.
Imagine two songs with the same musicians and same instruments recorded back to back. In Song 1, the guitarist, keyboardist, and bassist play notes mostly in low registers, producing a lot of low-end energy. In Song 2, they play notes mostly in mid to high registers, producing less low-end energy and more mid- and high-end energy. Not only will the two songs have dissimilar tonal balances; it would be a mistake to make them identical. The same applies to songs in different genres. The image below shows the frequency curves of a folk song (white line) and a rap song (blue line). Notice that the rap song has more energy in the low end and high end, but less in the low mids.
Frequency curves of a folk song (white line) vs. rap song (blue line)
A mixing engineer’s definition of tonal balance has to be extremely flexible and context-dependent. Mixing several songs for a single artist can yield intentionally disparate balances. After songs have been mixed, they’ll usually be passed along to a mastering engineer. Of course, the natural next question is, “What about the mastering engineer’s interpretation of tonal balance?”
By the time a song gets to the mastering stage, it’s often just a two-track stereo mix, though sometimes it will be split out into stereo stems (recorded submixes of a song’s elements, such as drums, bass, guitars, and vocals). So, in terms of balance, what is the mastering engineer’s focus?
The first goal is to enhance or reveal what the mixer was trying to establish. Due to a poorly treated control room or hyped monitors, the mix engineer may not have realized that the mix was actually deficient in the low end. The mastering engineer, often equipped with better monitors in a finely-tuned room, may hear this deficiency and correct it with EQ on the stereo mix. He or she tries to maintain the intended balance of the original mix, while making changes that will help it translate better to a variety of playback systems. If a mix’s low-end deficiency goes unresolved, the end listener will likely be disappointed by the lack of bass frequencies. It would be a shame for a mix to only sound right through the original monitors and in the original room in which it was mixed. That’s no good for the rest of the world.
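A gentle low-shelf boost is one common way to correct that kind of deficiency on the stereo mix. The sketch below uses the RBJ cookbook low-shelf design (slope S = 1); the corner frequency and gain are hypothetical values chosen for illustration, not a prescription:

```python
import numpy as np
from scipy.signal import lfilter

def low_shelf(fc, gain_db, sample_rate):
    """RBJ audio-EQ-cookbook low-shelf coefficients (b, a), shelf slope S = 1."""
    A = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * np.pi * fc / sample_rate
    cos_w0 = np.cos(w0)
    alpha = np.sin(w0) / 2.0 * np.sqrt(2.0)      # S = 1 simplification
    k = 2.0 * np.sqrt(A) * alpha
    b = A * np.array([(A + 1) - (A - 1) * cos_w0 + k,
                      2 * ((A - 1) - (A + 1) * cos_w0),
                      (A + 1) - (A - 1) * cos_w0 - k])
    a = np.array([(A + 1) + (A - 1) * cos_w0 + k,
                  -2 * ((A - 1) + (A + 1) * cos_w0),
                  (A + 1) + (A - 1) * cos_w0 - k])
    return b / a[0], a / a[0]

def fix_low_end(stereo_mix, sample_rate):
    # Hypothetical fix: the master is shy below ~120 Hz, so lift it by 2.5 dB.
    b, a = low_shelf(120.0, 2.5, sample_rate)
    return lfilter(b, a, stereo_mix, axis=0)     # filter each channel
```

Note the restraint: a broad, low-gain shelf nudges the overall curve rather than reshaping it, which matches the mastering goal of preserving the mix’s intended balance while improving translation.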
The second goal is to achieve consistency or appropriate differences within and between songs. If 12 songs for an album are sent to a mastering engineer, their order or sequence will be included. Based upon that song order, there will be a natural flow throughout the course of the album. Maybe the first three songs are upbeat and energetic, but the fourth song is a downtempo ballad. Considering this, the mastering engineer may use EQ and compression to make the first three songs crisp, punchy, and loud, but make the fourth song a bit quieter and mellower. Within any given song, mastering engineers also try to achieve a natural dynamic flow. You know the old saying, “Don’t bore us, get to the chorus”? Well, wouldn’t it be a big letdown if that all-important chorus just flatlined because of poorly used dynamics?
The mastering engineer tries to make each song sound its best and make all songs in an album flow. What I say next should not surprise you. Tonal balance for a mastering engineer changes depending on the genre, the artist, and the instrumentation. That’s why you can’t use the same EQ and compression settings on every song. The image below shows the frequency curves for two different songs on the same album by a rap artist. In comparing them, notice that there are peaks and dips in different places. That’s not weird; that’s normal.
Frequency curves of the same artist’s rap song 1 (white line) vs song 2 (blue line)
By now, you should have a better idea of what tonal balance means to others. Hopefully, you're also coming to terms with what it means to you. Never forget your ideal of a balanced sound and never forget that it is subject to change.
Copyright © 2001–2020 iZotope, Inc. All rights reserved.