Phase relationships between similar signals are a source of endless fascination in music production. Without phase manipulation, we wouldn’t have stereo, flanging, reverb, and other interesting audio phenomena.
But phase relationships can also present problems. The most common are signals that are completely out of phase (sometimes called reversed polarity), and unpleasant changes in tone caused by phasey, comb-filtered audio.
What is phasing in audio?
Phasing can be defined as the result of timing differences when combining identical (or nearly identical) signals. A static delay between the signals produces comb filtering, and phase shift can also come from extreme boosts with EQs that are not linear phase.
In a music production context, phasing has noticeable influence on the sound quality of your audio, and it pops up in all sorts of productions—recording, beat making, sampling, live shows, and more. When you use phasing to your advantage, it can result in interesting sounds. But phasing also has the potential to leave your tracks sounding thin and weak.
Phase relationships matter anytime you're combining two or more similar signals; phase is the timing relationship between them. Timing differences between similar signals are often caused by different distances between microphones and the sound source (close mic vs. distant mic, or DI vs. mic), different polarity or sound start points in samples, and latency incurred through digital processing.
The image below shows two complex waveforms perfectly in phase and the recorded sum of them, which results in an increase in amplitude (summing two identical signals in phase results in 6 dB of gain).
The image below shows two complex waveforms perfectly out of phase and the recorded sum of them, which results in complete cancellation.
Out of Phase
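To make the arithmetic concrete, here is a minimal Python sketch (using NumPy, with a 440 Hz test tone standing in for the complex waveforms above) showing the +6 dB gain of an in-phase sum and the total cancellation of an inverted sum:

```python
import numpy as np

sr = 48000
t = np.arange(sr) / sr
sig = np.sin(2 * np.pi * 440 * t)  # 440 Hz test tone

# In phase: summing two identical signals doubles the amplitude (+6 dB).
in_phase_sum = sig + sig
gain_db = 20 * np.log10(np.max(np.abs(in_phase_sum)) / np.max(np.abs(sig)))
print(round(gain_db, 2))  # 6.02

# Out of phase (inverted polarity): the sum cancels completely.
out_of_phase_sum = sig + (-sig)
print(np.max(np.abs(out_of_phase_sum)))  # 0.0
```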
The first image below shows two complex waveforms partially in phase and the recorded sum of them, which results in an increase in amplitude at some frequencies and a decrease at others. The second image shows the in-phase frequency response in white and the partially out-of-phase frequency response in blue. Listen to Audio Clip 2 to hear how a 180-sample offset impacts the tone.
Partially Out of Phase
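The notch frequencies of the resulting comb filter can be computed directly: the first notch falls where the delayed copy arrives half a cycle late, and further notches fall at its odd multiples. A short Python sketch, assuming the 180-sample offset above at a 48 kHz sample rate (the sample rate is an assumption here):

```python
sr = 48000       # assumed sample rate
offset = 180     # samples of delay between the two copies

delay_s = offset / sr
first_notch = 1 / (2 * delay_s)  # frequency where the delayed copy is inverted
notches = [first_notch * (2 * k + 1) for k in range(3)]
print([round(f, 1) for f in notches])  # [133.3, 400.0, 666.7]
```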
Next, the image below shows two complex waveforms perfectly in phase visually, but out of phase due to processing latency from the plug-in on the second track (real-time plug-ins don’t visually affect the waveform). The audio clip toggles each bar between the in-phase sound and out-of-phase sound (caused by the plug-in latency).
Phase Shift from Plugin
Alright, so it’s relevant. What do you do when things sound tragic because of phase issues? Repositioning a microphone would help if you were actively miking up the source, but I want to focus on what you can do when that isn’t an option.
Imagine having two kick sample tracks with waveforms that appear to be phase-aligned. If a plug-in is inserted on the second track, and that plug-in produces 190 samples of latency at 48 kHz, the second kick signal will be delayed by approximately 4 milliseconds. Likewise, if a kick is sent through a bus for parallel processing on another track and the compressor plug-in used for the parallel compression adds 190 samples of latency at 48 kHz, the parallel compressed signal would be delayed by approximately 4 milliseconds.
The first image below shows the visual difference between summed in-phase kick hits (top waveform) and summed kick hits that are slightly out-of-phase (bottom waveform). The second image shows the in-phase frequency response in white and the slightly out-of-phase frequency response in blue. The changes are the result of approximately 4 milliseconds of delay on a layered kick sample. The audio clip toggles every four hits between in-phase and slightly out-of-phase kicks.
Kicks Frequency Comparison
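The samples-to-milliseconds conversion from the kick example is simple arithmetic:

```python
sample_rate = 48000
latency_samples = 190

# Delay in milliseconds = samples / sample rate, scaled to ms.
latency_ms = latency_samples / sample_rate * 1000
print(round(latency_ms, 2))  # 3.96
```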
Since layering samples and utilizing parallel processing are commonly used techniques, the solution needs to be "set it and forget it" to avoid interrupting your workflow. The fix is to enable latency/delay compensation, which adds delay to tracks as needed to match the latency incurred on the most latent track.
Each DAW handles this in its own way. Some DAWs let you turn it on or off, while others always employ it. In some cases, there is a limit to the amount of latency the system can compensate for, and some types of plug-ins introduce more latency than others. It's an easy fix. Just remember, it doesn't address phase issues caused by signals being misaligned during recording or for other reasons.
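The idea behind delay compensation can be sketched in a few lines: delay every track by the difference between the most latent track and its own latency, so everything lines up. The track names and latency figures below are hypothetical:

```python
# Hypothetical per-track plug-in latencies, in samples.
track_latencies = {"kick": 0, "kick_layer": 190, "parallel_bus": 512}

# The host delays every track so all of them match the most latent one.
max_latency = max(track_latencies.values())
compensation = {name: max_latency - lat for name, lat in track_latencies.items()}
print(compensation)  # {'kick': 512, 'kick_layer': 322, 'parallel_bus': 0}
```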
The concept of adding delay to fix phase issues can be applied and performed manually through delay plug-ins. If you know that one of your two snare tracks has a high-latency plug-in that your system cannot correctly compensate for, you can use a plug-in to delay the track without the high-latency plug-in.
Most delay plug-ins can add as little as 1 millisecond of delay, but that is still 48 samples at 48 kHz. More accuracy would be ideal. Some delay plug-ins, such as Logic's Sample Delay, Avid's Time Adjuster, and Voxengo's Sound Delay, offer micro delays as short as 1 sample. Eventide's Precision Time Align allows adjustments as minute as 1/64th of a sample. That's finer than the native editing resolution of most DAWs!
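For a sense of scale, a quick Python sketch comparing those resolutions at 48 kHz:

```python
sr = 48000

# 1 millisecond of delay, expressed in samples at 48 kHz.
one_ms_in_samples = 0.001 * sr
print(one_ms_in_samples)  # 48.0

# 1/64th of a sample, expressed in microseconds.
one_64th_sample_us = (1 / 64) / sr * 1e6
print(round(one_64th_sample_us, 3))  # 0.326
```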
Using such plug-ins is simple.
If you have a rad metering plug-in such as iZotope’s Insight, you can use its Sound Field display to help you. For the snare example, you would pan the snare tracks hard left and right (opposite of each other) and use your solos and/or mutes to ensure that only the snare tracks are going to your main output’s master fader. Once Insight is inserted on the master fader, its Sound Field display will indicate cancellations from poor phase alignment as lower vertical activity and movement toward the negative end of the scale. Maximum vertical activity and movement toward the positive end of the scale indicates a good phase relationship. While watching the meter, vary the delay value in the plug-in until the Sound Field display shows the highest vertical activity.
Once that is done, return the track pans to their previous mix positions. Toggling the bypass on the delay plug-in lets you compare between the aligned tone and the pre-aligned sound.
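Insight's Sound Field display is, in effect, visualizing the correlation between channels: values near +1 indicate a good phase relationship, while negative values indicate cancellation. A minimal numeric analogue in Python, using a sine tone as a stand-in for the hard-panned snare tracks:

```python
import numpy as np

sr = 48000
t = np.arange(sr // 10) / sr
left = np.sin(2 * np.pi * 200 * t)  # stand-in for one snare track

right_aligned = left.copy()   # well aligned: correlation near +1
right_inverted = -left        # inverted polarity: correlation near -1

def correlation(a, b):
    """Pearson correlation between two channels."""
    return float(np.corrcoef(a, b)[0, 1])

print(round(correlation(left, right_aligned), 2))   # 1.0
print(round(correlation(left, right_inverted), 2))  # -1.0
```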
When you can see that waveforms of similar signals are not visually aligned, the traditional approach is to move the regions until the waveforms line up. This is a dangerous power to have, as you can create mind-boggling chaos in the phase department if you aren't careful. If you just start sliding regions around willy-nilly, you might not be able to get back to your starting point. So, before moving any regions, save a copy of the session or duplicate the track layer (playlist).
In the image below, the top track is a bass DI signal and the bottom is a mic on the bass cab. Since a DI involves a direct analog electrical connection between the instrument and the audio interface, there is no inherent delay. However, the distance between the speaker and the mic creates a time delay. As a result, you should move the mic region earlier in time to align it with the DI region.
Generally, the goal is to make the most significant peaks and dips line up (within reason). DAWs that allow you to move regions with sample accuracy yield more sonic control than programs limited to frame or tempo-based resolution. The audio clip toggles between the original unaligned and the aligned version every four notes.
If you like the idea of aligning waveforms, but wish there was an app for that, you're in luck! MAutoAlign by MeldaProduction and Auto-Align by Sound Radix are plug-ins designed to analyze audio and automatically apply delay and polarity inversion to align the signals. Each one handles two signals at a time, making them ideal for common two-track scenarios such as bass DI and mic, kick in and out, or close guitar mic and room mic. Both plug-ins have a similar operational style: fast and easy. The plug-ins do the work of comparing audio sources and choosing how much delay to add.
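Under the hood, tools like these typically rely on cross-correlation to find the offset at which two signals best match. A minimal Python sketch of the idea, using white noise as a stand-in for the tracks and the 190-sample latency figure from earlier:

```python
import numpy as np

rng = np.random.default_rng(0)
reference = rng.standard_normal(2000)  # stand-in for the DI track

true_offset = 190  # the mic track lags the DI by 190 samples
delayed = np.concatenate([np.zeros(true_offset), reference])[: len(reference)]

# Cross-correlate and pick the lag with the strongest match.
corr = np.correlate(delayed, reference, mode="full")
lag = int(np.argmax(corr)) - (len(reference) - 1)
print(lag)  # 190 -- delay the reference (or advance the mic) by this amount
```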
Moving outside the realm of software, there are analog hardware units purpose-built for real-time adjustment of phase relationships. One major advantage is portability—use them at your normal studio, take them to a live gig, or loan them to a trusted friend (and get some collateral). Also, being analog, they don’t induce latency.
The Little Labs IBP is a humble-looking box for modifying a single channel, whereas the Radial Phazerbank is capable of four channels of phase-tweaking greatness. Both offer XLR and 1/4" connectivity, polarity switching, and variable phase via a dedicated potentiometer. The IBP incorporates selectable phase shift range and high or low center frequency, while the Phazerbank utilizes a dry/wet control and a selectable low-pass filter with adjustable frequency.
The workflow with each unit is the same for mixing applications.
If you are using a DAW to do this, latency will be incurred by the D/A and A/D conversion processes. If your system automatically compensates for hardware latency, no worries! If it doesn’t, then you’ll be creating a phase offset by sending it out of your audio interface and back in. The good news is that the versatile range of the IBP/Phazerbank is likely sufficient to compensate for that.
Phase is your friend, your enemy, and it's waiting to surprise you at every turn. Visually aligning similar signals makes sense in theory, but ultimately you should consult your ears to confirm that you prefer the aligned sound. Play around with some tracks, experiment with timing, and always listen, listen, listen.