Blog 2: Compressing compression into a five-minute read!

Let's take the bewildering topic of audio compression and compress it: here's the only reason your recording will need it.

8/26/2025 · 5 min read

In this blog I’m going to tackle the challenging mixing concept of compression. Of all the effects in the audio engineer’s toolbox, it is probably the second most important tool he or she has, next to EQ (equalization).

Audio engineers go to weekend seminars on the topic of compression, but I’m going to make it very easy, because it is easy. My years in mixing have taught me that there is only one reason a mix should have compression, and that reason is explained by a very nontechnical term: glue.

Okay, okay, we do need to do a little audio physics lesson first, because any article you read on compression will tell you its purpose is to “control dynamic range”, which is true. So bear with me a tad here.

A voice narrator or a singer projects their voice into a microphone. The end result is a narrated book, a song, whatever the art form they are creating. Without doubt, some of the words or pitches that get recorded will be louder than others. That is the nature of the human voice, and it is the nature of musical instruments.

A problem in audio occurs when the “dynamic range” between the softest sounds made in the recording and the loudest ones is too great. Take a standard rock song. You are rocking away, but you notice you can’t hear the singer on some of the softer lyrics. No problem, so you turn the volume up. Now you can hear them, but the loud parts are annoyingly loud. So you turn it down, but now the soft parts are too soft again.

This problem in audio is called “excessive dynamic range”. Compression is a very clever tool that was invented to deal with this issue. The sounds, say the rock song, are fed into a computer. (In the earlier days, they were fed into a piece of analog equipment, but the concept is the same.) A compressor is added to the sound. The compressor says, “Any sounds louder than a certain decibel level are going to be reduced. The softer sounds I’ll leave alone.” The louder sounds have their decibel level reduced (attenuated), and the softer sounds can now be heard because they sit at a higher ratio of volume to the adjusted louder sounds. Yes, very clever.
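To make that idea concrete, here is a minimal sketch of that “turn down anything above a certain level” rule in Python. The threshold and ratio numbers are made up purely for illustration, and a real compressor has more controls than this, but the core arithmetic really is this simple.

```python
import numpy as np

def compress(samples, threshold_db=-18.0, ratio=4.0):
    """Turn down any sample louder than the threshold; leave softer ones alone.

    threshold_db and ratio are illustrative values, not a recommendation.
    """
    eps = 1e-12                                         # avoid log of zero on silent samples
    level_db = 20 * np.log10(np.abs(samples) + eps)     # each sample's level in decibels
    over_db = np.maximum(level_db - threshold_db, 0.0)  # how far above the threshold it is
    gain_db = -over_db * (1.0 - 1.0 / ratio)            # attenuation applied only to loud samples
    return samples * 10 ** (gain_db / 20.0)

# A loud peak and a soft one: after compression the loud peak is pulled down,
# so the soft one now sits at a higher ratio of volume to it.
audio = np.array([0.9, 0.05])
print(compress(audio))  # the 0.9 shrinks noticeably; the 0.05 passes through untouched
```

(If you skipped the code, no problem. The takeaway is simply that the loud samples get turned down and the soft ones pass through.)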

In music, it is the exception for a recording not to have some dynamic range issue that causes the listener to hear the loud volumes at the expense of the softer ones, or vice versa. In voice narrations, the problem is more subtle, but it is still there. Every human voice I’ve listened to in the studio has some parts of its dynamics that are louder than others across that voice’s frequency spectrum. Some of those differences sound good. Some of them don’t.

The complications surrounding compression have arisen from the many ways audio engineers have found to apply it. We have seen that it has everything to do with dynamic range, so let’s consider these dynamic range situations:

1. A voice narrator gets too close to the mic every couple of seconds or so and is louder for those few seconds compared to the rest of the narration.

2. A singer gets too loud every time she sings F# above middle C.

3. Another singer sings the chorus so loud that either it is too loud or the verses are too soft for the whole song.

4. The very first “hit” in the sound of each beat of the snare drum is too punchy.

5. One singer singing harmony simply sings a few notes too softly each time compared to the other singers.

6. The sound of each bass guitar note fades away a bit too quickly, so it can no longer be heard for as long as it should be.

7. The whole mix just doesn’t sound quite cohesive.

And I could go on. Each of these examples has a dynamic range problem built into it. Accordingly, each of these problems is often addressed by compression.

Now comes the dilemma every audio engineer knows: Every time compression is used, there is a risk that another problem is created.

Compression can bring up unwanted softer sounds. It can make the sound being compressed seem unnatural or “squashy”. The compressor itself can sometimes be heard “pumping” as it attacks and releases. It may solve a dynamic range issue in one part of the recording but create an overly compressed range in another. Or it may make the overall recording sound too “tight” when it needs to “breathe” in certain places.
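For the curious, the “pumping” comes from the way a compressor eases its gain reduction in and out over time rather than instantly. Below is a small, hypothetical sketch of that attack-and-release smoothing in Python; the millisecond values are invented for illustration. A fast attack clamps the level down quickly on a loud hit, a slow release lets it swell back up afterwards, and it is that audible swell we call pumping.

```python
import numpy as np

def smooth_gain(target_gain, sample_rate=44100, attack_ms=5.0, release_ms=250.0):
    """Follow the compressor's desired gain reduction with attack/release smoothing.

    attack_ms and release_ms are illustrative numbers only.
    """
    attack = np.exp(-1.0 / (sample_rate * attack_ms / 1000.0))    # fast smoothing coefficient
    release = np.exp(-1.0 / (sample_rate * release_ms / 1000.0))  # slow smoothing coefficient
    smoothed = np.empty_like(target_gain)
    g = 1.0                                           # start with no gain reduction
    for i, target in enumerate(target_gain):
        coeff = attack if target < g else release     # clamp down fast, recover slowly
        g = coeff * g + (1.0 - coeff) * target
        smoothed[i] = g
    return smoothed
```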


As I struggled to sort all of this out as an engineer, I began to notice that the engineers who were teaching these techniques were typically quite experienced, and many of them started their careers in the “analog era”, where the compressing was done with studio hardware before the digital age came along. After all, compression was first used as an audio engineering tool in the 1930s and ’40s, and the digital age did not really assume ascendancy until the 1980s and ’90s. Those teachers of audio in the second half of the 20th century laid the groundwork for what we do today.

That experience, of course, is a good thing and was invaluable to me, but I also kept getting introduced to newer and newer technologies that would target just one of the seven problems mentioned above and do a very good job at it. I won’t get too technical here, but audio engineers will recognize terms like "transient shapers", "automation", and "clip gaining". Each of these techniques, all of them purely digital, offered solutions to the above compression problems and, when done correctly, left far fewer problems in my experience than the compression did.

Except one. There was still one of those issues in which compression was king. But only that one. That one is number 7 on my list: “The whole mix just doesn’t sound quite cohesive.” This is the one thing that compression, in my opinion, does very well, and when used judiciously it does so without any nasty “side effects”. Which brings us back to the more artistic than scientific, but very descriptive, term “glue”.

Most audio engineers spend the first year or two of their training trying to “hear compression”. I certainly struggled with this concept. What was I supposed to be listening for? Then one day, while taking a long walk and ruminating over a lesson on compression I had taken online earlier that morning, a thought experiment came to me. What if I were listening to a song on the moon? What would it sound like?

Well, I recalled from school that there is very little atmosphere on the moon, since its gravitational force is so much weaker, so the sound would be very washy and thin. What if I were listening to the song on a very heavy planet? It seemed it would certainly sound thicker and denser, as the sound waves would be compacted more heavily as they traveled to my ears.

That little thought experiment turned on the light for me: a little compression makes audio sound as if the atmosphere in which the sounds are being made is a bit denser than the one we normally hear them in, glued together if you will. When I got home, I flipped on some compression on a song I was working on, and yes, I could hear it! That’s what it sounded like! And now when I listen to recorded sound, I can easily hear it, or hear the need for it.

That epiphany certainly helped my mixes improve considerably. Almost every recording of a musical instrument or a human voice can use a little glue. It just makes the whole thing sound like it fits together. Used this way, compression has been a good, reliable friend of mine in all of my post-production voice work as well as my mixing work. And this singular use of compression is generally side-effect free.

Just as importantly, I leave the other six items on the list to the more advanced digital techniques that have been developed since the advent of compression. We’ll explore some of those in future blogs.







