The aim of this article is to familiarize you with five of the most common vocal effects, how they work, and what they sound like in action. If you’re a singer, you’ll definitely hear them when working in the studio. And if you’ve just started home recording, you’ll come across them in your DAW before long, if you haven’t already.
The holy grail of vocal effects, reverb is used on just about all vocal tracks to create a sense of space. We don’t usually think about it in the moment, but we hear reverb around us all the time.
As defined in our “Reflecting on Reverb” blog, reverb is “a series of many closely-spaced sound reflections that we hear as a continuous sound.” Put simply, reverb is the sound of a space.
Reverb can be captured naturally while recording a vocal. It can also be added in post-recording using DAW plug-ins that alter or enhance the qualities of a room. DIY bedroom vocals can be made to sound like they were captured in a large church or stadium. We’re so accustomed to reverb that a vocal can actually sound pretty strange without it.
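Since reverb is just a series of closely spaced, progressively quieter reflections, it can be sketched in a few lines of code. Below is a minimal, illustrative numpy sketch (the function name and parameters are my own, not from any plug-in) that sums delayed, decaying copies of a signal, the way a room bounces sound back at you:

```python
import numpy as np

def simple_reverb(dry, sr=44100, delay_ms=40.0, decay=0.5, taps=8):
    """Crude multi-tap echo: sums progressively quieter, closely spaced
    copies of the signal to mimic a room's reflections."""
    delay = int(sr * delay_ms / 1000)          # reflection spacing in samples
    wet = np.zeros(len(dry) + delay * taps)    # leave room for the tail
    wet[:len(dry)] += dry                      # the direct (dry) sound
    for i in range(1, taps + 1):
        start = i * delay
        wet[start:start + len(dry)] += dry * (decay ** i)  # each echo is quieter
    return wet

sr = 44100
click = np.zeros(sr)     # one second of silence...
click[0] = 1.0           # ...with a single impulse at the start
tail = simple_reverb(click, sr)
print(round(tail[int(sr * 0.04)], 3))  # first reflection arrives 40 ms later
```

Real reverbs use thousands of irregularly spaced reflections (or a recorded impulse response of an actual space), but the principle is the same: the "wet" signal is the dry one plus its decaying echoes.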
Listen to a dry vocal recording, then with reverb.
Boiled down to basics, compression is automatic volume control. It helps even out vocal recordings so they don’t get lost in the mix or overpower other song elements. Most vocal recordings contain volume changes—a singer may start softly but belt out the chorus. With compression, you can minimize these peaks and dips.
For lead vocals, compression is used to tame loud transients that clip or jump out of the mix. This is done by setting a threshold: anything below the threshold passes through untouched, while anything above it is momentarily reduced in level.
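To make the threshold idea concrete, here is a minimal, illustrative numpy sketch of a per-sample compressor (the function and its parameters are my own simplification, not any plug-in's actual algorithm). Everything under the threshold is left alone; the excess above it is divided by a ratio:

```python
import numpy as np

def compress(signal, threshold=0.5, ratio=4.0):
    """Per-sample compressor sketch: samples under the threshold pass
    through untouched; the excess above it is divided by the ratio."""
    out = signal.copy()
    over = np.abs(signal) > threshold                       # which samples are too loud?
    excess = np.abs(signal[over]) - threshold               # how far over the line
    out[over] = np.sign(signal[over]) * (threshold + excess / ratio)
    return out

quiet_verse = np.array([0.2, 0.3, 0.25])
belted_chorus = np.array([0.9, -1.0, 0.8])
print(compress(quiet_verse))    # unchanged: everything is below the threshold
print(compress(belted_chorus))  # peaks pulled down toward the threshold
```

A real compressor also has attack and release times so the gain changes smoothly rather than sample by sample, but the threshold-and-ratio logic above is the core of it.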
Even if a singer is fairly consistent during the recording process, producers often add compression to tighten up the vocal mix. For more detailed vocal compression tips, read on here.
Listen to a dry vocal recording, then with light compression to smooth out some of the peaks. For example, the Ouuu swell that happens at the end of “you” is less noticeable once compressed.
Now, with a white noise layer.
A de-esser is a type of compression tuned to be sensitive to noisy, high-frequency vocal sounds called sibilance. The main culprits are Sss sounds, which is where the de-esser gets its name, but the letters F, X, and T can introduce issues as well.
Words with accentuated sibilance grate on listeners’ ears, especially over headphones. Sibilance is generally just annoying, but it can also cause distortion. Read “Sing a Song of Sixpence” out loud and you’ll get a good idea of how sibilance can be a problem when recording.
Microphones that capture a lot of high end are typically prone to sibilance issues. Too much compression can lead to these nasty whistling sounds too, so always place your de-esser before a compressor in your effects chain.
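The key trick of a de-esser is that it splits the vocal into a bright band and everything else, and only turns down the bright band when it gets too hot. Here's a deliberately crude numpy sketch of that split-band idea (the first-difference "high-pass" and all parameter values are my own toy choices, nothing like a real de-esser's tuned filters):

```python
import numpy as np

def deess(signal, threshold=0.1, reduction=0.5):
    """Split-band de-esser sketch: a crude first-difference high-pass
    isolates the bright, fast-moving content (the "esses"); when that
    band exceeds the threshold, only it is turned down, not the whole vocal."""
    high = np.diff(signal, prepend=signal[0])  # bright band: rapid sample-to-sample change
    low = signal - high                        # everything else
    hot = np.abs(high) > threshold             # where the brightness is excessive
    high[hot] *= reduction                     # tame only those moments
    return low + high

vowel = np.ones(8) * 0.3             # steady, low-frequency content
ess = np.array([0.3, -0.3] * 4)      # fast wiggle, like a harsh "S" sound
print(np.allclose(deess(vowel), vowel))             # vowel passes untouched
print(np.sum(deess(ess)**2) < np.sum(ess**2))       # sibilant energy reduced
```

Because the gain reduction is applied only to the bright band, and only when it crosses the threshold, the body of the vocal stays intact, which is exactly why a de-esser sounds so much more natural than simply EQing out the highs.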
In this new recording, there is a particularly harsh “S” at the beginning of the word “sleep.” Using the De-ess module in RX, the “S” sound can be attenuated. For the purpose of demonstration, the result is more extreme than it should be. The goal is not to remove all sibilance, but to tame it for a more pleasant, controlled vocal.
Listen to the vocal dry, then with the de-esser, and finally, the “ess output,” which is what RX cuts.
I like to follow a “what happens when I do this?” approach to music production. I can get great results from this approach using VocalSynth 2. It’s a wild synthesizer-based effects rack for vocals, but it works just as well with non-vocal content.
So, add an instance of VocalSynth to the white noise and listen to the new textures that are brought out. VocalSynth succeeds at enhancing high end frequencies so they sound detailed and expressive. This works swimmingly with white noise.
EQ (aka equalization) is used to sculpt the frequency content of vocals, instruments, and effects so they sound good together in a mix.
When referencing frequency content, producers and engineers use three main categories: lows, mids, and highs. In music, lows are sub and bass frequencies, and highs are the harmonics that give clarity and space to a mix. Vocals fall into the mid range and can spill into the highs. Male vocals are usually at the bottom end of this spectrum and female vocals are up top.
Most other instruments occupy the mid range too. EQ is a valuable tool because it allows you to cut frequencies that occupy the same space as vocals, lending a sense of balance to the final mix.
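Under the hood, an EQ cut is just a filter that isolates a frequency band and subtracts some of it. This illustrative numpy sketch (function names and the one-pole filter are my own simplification, not any EQ plug-in's design) shows a low-shelf-style cut that halves the low band while leaving fast-moving content alone:

```python
import numpy as np

def low_band(signal, alpha=0.2):
    """One-pole low-pass: keeps the slow-moving (low-frequency) part."""
    out = np.zeros_like(signal)
    acc = 0.0
    for i, x in enumerate(signal):
        acc += alpha * (x - acc)   # smoothed running value tracks the lows
        out[i] = acc
    return out

def cut_lows(signal, amount=0.5):
    """Shelf-style EQ cut: subtract part of the low band, e.g. to make
    room underneath a vocal for bass and kick."""
    return signal - amount * low_band(signal)

rumble = np.ones(200) * 0.5            # sustained low-frequency content
print(round(cut_lows(rumble)[-1], 3))  # settles near 0.25: lows halved
```

Parametric EQs chain several such filters, each tuned to its own center frequency and width, but every band boost or cut boils down to this add-or-subtract-a-filtered-copy idea.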
Listen to EQ in action. First, removing low frequencies, then the mids, then the highs.
Pitch correction works by reading the pitch of a vocal and correcting the off-key notes. Over the last decade, it’s become the go-to effect for vocal mixers, primarily in hip-hop and R&B, but also in pop and electronic music.
If a singer does a great take with one sour note, pitch correction can fix it, without the need for a re-do. But pitch correction isn’t just a crutch for bungled vocals—it can be used as a unique modulation and distortion effect. You can hear a range of pitch correction used in music by Bon Iver, Frank Ocean, and Post Malone.
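The pitch side of this is easy to demonstrate: shifting by semitones means multiplying the frequency by a power of the twelfth root of two. Here is an illustrative numpy sketch (my own naive approach, not how Nectar or any real pitch corrector works) that repitches by resampling, which also changes the clip's length, a side effect real pitch correction avoids:

```python
import numpy as np

def repitch(signal, semitones):
    """Naive pitch shift by resampling: playing the audio faster or
    slower changes its pitch (and, in this crude version, its duration
    too -- real pitch correction keeps the timing intact)."""
    factor = 2 ** (semitones / 12)              # equal-tempered frequency ratio
    idx = np.arange(0, len(signal), factor)     # read positions in the original
    return np.interp(idx, np.arange(len(signal)), signal)

sr = 8000
t = np.arange(sr) / sr
a4 = np.sin(2 * np.pi * 440 * t)   # one second of A4 (440 Hz)
g4 = repitch(a4, -2)               # two semitones down, close to G4 (~392 Hz)
print(len(g4) > len(a4))           # True: slower playback makes the clip longer
```

Tools like Auto-Tune and Nectar's pitch module avoid the duration change by stretching and shifting in small overlapping windows, but the semitone-to-ratio math is the same.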
Listen to this vocal get repitched two semitones down with Nectar. Dry, and then pitched.
VocalSynth offers re-pitching capabilities too, along with a range of transformative synthesizer-based vocal effects. Listen to the preset Time it Right.
The five effects discussed in this article are used across all genres, styles, and BPMs. Learning how vocal effects work is the key to creating your unique vocal sound. With an understanding of the basics, you will be able to make quicker studio decisions and feel confident in them.