Blog 2: Compressing compression into a five-minute read!

Let's take the bewildering topic of audio compression--and compress it. Here's the only reason your recording will need it.

8/26/2025 · 6 min read

In this blog, I’m going to tackle the challenging mixing concept of compression. Of all the tools in the audio engineer’s toolbox, it is second in importance only to EQ (equalization).

Audio engineers go to weekend seminars on the topic of compression, but I’m going to make it very easy—because it is. My years in mixing have taught me that there is only one reason a mix should have compression, and that reason is best explained by a term that doesn't even exist (so I had to make it up): the need for gravitization.

Okay, okay, we do need a little audio physics lesson first. Most articles you read on compression will tell you its main purpose is to “control dynamic range,” which is true, but controlling dynamic range works by emulating a selective increase in the gravitational force bearing upon the loudest sound waves in a recording. So bear with me a tad here.

A voice narrator or a singer projects their voice into a microphone. The result is a song, a narrated voice, whatever. Without doubt, some of the words or pitches that get recorded will be louder than others. That is the nature of the human voice, and it is the nature of musical instruments.

A problem in audio occurs when the difference between the softest sounds in the recording and the loudest ones is too great. Take a standard rock song. You are rocking away listening to it, and it sounds good, but you notice you are struggling to hear some of the softer lyrics. No problem, you turn the volume up. Now you can hear the lyrics fine, but the louder parts of the song sound too "blarey". So you turn it down, but now the soft parts are too soft.

This problem in audio is called “excessive dynamic range”. Compression is a very clever tool that was invented to deal with this issue. When the sounds are fed into a computer (in the earlier days they were fed into a piece of analog equipment, but the concept is the same), a compressor is applied to the sound. The compressor says, “I am going to lower the decibel level (volume) of any sounds louder than a certain threshold level. The softer sounds I’ll leave alone.” The result is that the louder sounds have their volume level reduced (attenuated). Now the softer sounds are easier to hear while one is listening to the song, because their volume is higher relative to the overall volume of the song. Yes, clever.
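For the technically curious, that threshold-and-attenuate behavior can be sketched in a few lines of code. This is a minimal, illustrative hard-knee compressor working on raw sample values; the threshold and ratio numbers are just examples, and real compressors add attack, release, and make-up gain on top of this core idea.

```python
import math

def compress(samples, threshold_db=-18.0, ratio=4.0):
    """Minimal hard-knee compressor sketch.

    Samples whose level rises above threshold_db are attenuated so that
    only 1/ratio of the excess remains; softer samples pass through
    untouched -- exactly the "leave the soft sounds alone" rule above.
    """
    out = []
    for s in samples:
        if s == 0:
            out.append(0.0)
            continue
        level_db = 20 * math.log10(abs(s))        # sample level in dB
        if level_db > threshold_db:
            excess = level_db - threshold_db      # how far above the threshold
            new_db = threshold_db + excess / ratio
            out.append(math.copysign(10 ** (new_db / 20), s))
        else:
            out.append(float(s))                  # softer sounds left alone
    return out
```

Feed it a loud sample and a quiet one, and only the loud one gets turned down, shrinking the gap between them -- which is all "controlling dynamic range" really means.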

In music--and even spoken voice narrations--it is the exception that a recording will not have some dynamic range issue causing the listener to hear the loud volumes at the expense of the softer ones, or vice versa. Every human voice I’ve listened to in the studio has some parts that are louder than others across that voice’s frequency spectrum. Some of those differences sound good. Some of them don’t. So the engineer applies compression to the ones that don't, and the problem is solved. So far, simple.

The complications surrounding compression have arisen from the many ways audio engineers have found to apply it. We have seen it has everything to do with dynamic range. So let’s consider these dynamic range situations:

1. A voice narrator gets too close to the mic for part of her narration, so her voice is louder for those few seconds than the rest of the narration. Or she pronounces some consonants too loudly or too softly compared to others, and the narration sounds unnatural.

2. A singer gets too loud every time she sings F# above middle C. That's just the nature of her voice.

3. Another singer sings the chorus so loudly that either the chorus is too loud or the verses are too soft for the whole song.

4. The very first “hit” in the sound of each beat of the snare drum is too punchy.

5. One singer singing harmony simply sings a few notes too softly each time compared to the other singers.

6. The sound of each bass guitar note fades away too quickly, so it can no longer be heard. This is a particularly common problem when the bass guitar is competing with a kick drum to be heard in a song.

7. The whole mix just doesn’t sound quite cohesive.

And I could go on. Each of these examples has a dynamic range problem built into it. Accordingly, each of these problems is often addressed by compression. Now comes the dilemma every audio engineer knows: every time compression is used, there is a risk that another problem is created.

Compression can bring up unwanted softer sounds. It can make the sound being compressed seem unnatural or “squashy”. The compressor itself can sometimes be heard “pumping” as it attacks and releases. It may solve a dynamic range issue in one part of the recording but create an overly compressed range in another. Or it may make the overall recording sound too “tight” when it needs to “breathe” in certain places.

As I struggled with sorting all of this out as an engineer, I began to notice that the engineers who were teaching these techniques were typically quite experienced, and many of them started their careers in the “analog era,” where compression was done with studio hardware before the digital age came along. After all, compression was first used as an audio engineering tool in the 1930s and '40s, and the digital age did not assume ascendancy until the 1980s and '90s. Those teachers of audio in the second half of the 20th century laid the groundwork for what we do today.

That experience, of course, is a good thing and was invaluable to me, but I also kept getting introduced to newer and newer technologies that would target just one of the seven problems mentioned above and do a very good job at it. I won’t get too technical here, but audio engineers will recognize the terms "transient shapers", "automation", and "clip gaining". Each of these techniques—all of them purely digital—offered solutions to the above compression problems and, when done correctly, left far fewer problems in my experience than compression did.

Except one. There was still one of those issues in which compression was king. But only that one. That one is number seven on my list: “The whole mix doesn’t sound quite cohesive.” This is the one effect that compression, in my opinion, does very well and, when used judiciously, does without any nasty “side effects”. Which brings us back to that more artistic than scientific but very descriptive term, “gravitization”.

But what does this have to do with gravity? Hold on, I'm getting there.
Most audio engineers spend the first year or two of their training trying to "hear compression”. I certainly struggled with this concept. What was I supposed to be listening for? Then one day, while I was taking one of my usual long walks, I got to thinking about a lesson on compression I had taken earlier that day. A crazy thought experiment came to me: "What if I were listening to a song on the moon? What would it sound like?"

Well, sound waves are pressure waves. They place pressure on their surrounding atmosphere. We know there is very little atmosphere on the moon, since the moon’s gravitational force is so much weaker, so the sound would be very wispy and thin. What if I were listening to the song on a very heavy planet? It seemed it would certainly sound thicker and denser, as the sound waves would be compacted more heavily as they traveled to my ears.

That little thought experiment lit the light for me: a little compression makes audio sound as if the atmosphere in which the sounds are being made is a bit denser than we normally hear—a little "pulled together", if you will. And as a general rule, that makes recorded music and spoken voices sound better. When I got home, I flipped some compression on a song I was working on, and yes, I could hear it! And now when I listen to recorded sound, I can easily hear it—or hear the need for it. It may sound a little crazy, but as a mixer, on virtually every recording I work on, I ask myself: "Does this song need a little more gravitization? That is, would it sound better if the earth's gravity were pulling it together just a bit more?"

That epiphany moment helped my mixes improve considerably. Almost every recording of a musical instrument or a human voice can use a little gravitization. It just makes the whole thing sound like it fits together. Used this way, compression has been a good, reliable friend of mine in all of my post-production voice work as well as my mixing work. And this singular use of compression is generally side-effect free.

Just as importantly, I leave the other six items on the list to the more advanced digital techniques that have developed since the advent of compression (like the aforementioned transient shapers, clip gaining, and automation), as those needs have nothing to do with needing the mix itself to be a bit "tighter" or "glued together".

Keeping these other problems separate from the concept of compression has made a difference in my mixing. We’ll explore the latest techniques audio mixers can use on them in future blogs.







