Human Hearing Explained: What Your Ears Really Do
Ever wondered why some frequencies hurt or why a 100Hz bass note needs to be 100 times more powerful than a 1kHz tone to sound equally loud? Here is the deep dive into human hearing, complete with audio illusions.
The Acoustic Journey: Outer, Middle, and Inner Ear
Sound is ultimately nothing but fluctuating mechanical air pressure, yet the human body translates these invisible shockwaves into the profound emotional experience of music, speech, and environmental awareness. The ear is not merely a passive funnel; it is an active, highly tuned biomechanical analyzer.
But how does literal air pressure become a neural signal? To understand this, we must trace a sound wave’s journey from the moment it strikes the side of your head to the instant your brain recognizes it as a symphony.
The human ear acts as an impedance-matching transformer and a precise frequency analyzer, capable of detecting vibrations smaller than the diameter of a single atom, while also withstanding acoustic power a trillion times greater without immediate destruction.
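That trillion-to-one power ratio is exactly why hearing is measured on the logarithmic decibel scale. A quick arithmetic check, using the conventional threshold-of-hearing reference:

```python
import math

# A power ratio of one trillion (10^12) relative to the threshold of
# hearing, expressed on the decibel scale: dB = 10 * log10(P / P_ref).
power_ratio = 1e12
level_db = 10 * math.log10(power_ratio)
print(level_db)  # 120.0 -- roughly the span from silence to the pain threshold
```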
When an acoustic wave first arrives, it encounters the pinna—the fleshy, visible part of the outer ear. The pinna’s asymmetrical ridges are not merely decorative; they act as a complex directional filter. Depending on whether a sound originates from above, below, or behind you, the pinna shapes the wave differently, producing specific high-frequency cancellations that let your brain localize the source, particularly its elevation and whether it lies in front of or behind you.
As the wave travels down the ear canal, it hits the tympanic membrane (eardrum). The ear canal naturally resonates around 3kHz, mechanically amplifying frequencies in this critical bandwidth by nearly 10 decibels before they even reach the eardrum. This evolutionary trait perfectly overlaps with the primary spectral energy of human speech consonants.
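The 3kHz figure follows from simple tube acoustics: a canal open at one end (the pinna) and closed at the other (the eardrum) resonates at a quarter wavelength. A quick sketch, assuming a typical canal length of 2.5 cm and room-temperature air:

```python
# Quarter-wavelength resonance of a tube open at one end, closed at the
# other: f = c / (4 * L). Assumed values: speed of sound 343 m/s (~20 C),
# ear canal length ~2.5 cm (a typical adult figure).
speed_of_sound = 343.0   # m/s
canal_length = 0.025     # m

resonant_freq = speed_of_sound / (4 * canal_length)
print(f"{resonant_freq:.0f} Hz")  # 3430 Hz -- near the ~3kHz peak of the ear canal
```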
Next, the eardrum vibrates the ossicles—the malleus, incus, and stapes (hammer, anvil, and stirrup)—the three smallest bones in the human body. Because the fluid of the inner ear has far higher acoustic impedance than air, most of a wave's energy would simply reflect off the boundary; to transfer it effectively, the middle ear concentrates the acoustic pressure by a factor of over 20, largely via the area ratio between the large eardrum and the tiny oval window, before it enters the fluid-filled cochlea.
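That factor of over 20 can be sanity-checked with a back-of-envelope calculation. The anatomical values below are commonly cited approximations, not precise measurements:

```python
# Rough pressure gain of the middle ear. The same force collected over the
# large eardrum is delivered to the much smaller oval window, and the
# ossicular chain adds a small lever advantage. Values are approximate.
eardrum_area = 55.0      # mm^2, effective vibrating area of the tympanic membrane
oval_window_area = 3.2   # mm^2, footplate of the stapes
lever_ratio = 1.3        # mechanical advantage of the ossicles

pressure_gain = (eardrum_area / oval_window_area) * lever_ratio
print(f"{pressure_gain:.1f}x")  # ~22x, matching the "factor of over 20"
```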
The cochlea is a snail-shaped, fluid-filled organ lined with thousands of microscopic hair cells along the basilar membrane. High frequencies cause the base of the membrane to vibrate, while low frequencies travel further to resonate at the apex. Here, mechanical motion is finally converted into electrical nerve impulses—the beginning of true auditory perception.
The Frequency Domain: From Infrasound to Ultrasound
Human hearing is generally defined as spanning from 20 Hz to 20,000 Hz (20 kHz). Frequencies below 20 Hz are known as infrasound. While we cannot "hear" them as distinct tones, our bodies can feel them as physical vibration, often interpreting visceral infrasound as a sense of unease or awe—a phenomenon frequently exploited in cinema.
On the opposite end, frequencies above 20 kHz plunge into ultrasound. Dogs, bats, and dolphins rely heavily on this spectrum, but for humans, the upper limit begins to fall soon after childhood and keeps declining with age. The following audio test sweeps from 50 Hz up to 18 kHz. Notice exactly when the sound disappears for you.
50Hz → 18kHz. If you hear nothing at the end, that is normal.
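If you want to generate a comparable test signal yourself, here is a minimal sketch using only Python's standard library; the sample rate, duration, and output file name are my own choices, not tied to the demo above:

```python
import math
import struct
import wave

# Generate an exponential sine sweep from 50 Hz to 18 kHz and save it as
# a 16-bit mono WAV file.
sample_rate = 44100
duration = 5.0
f_start, f_end = 50.0, 18000.0
n_samples = int(sample_rate * duration)

k = math.log(f_end / f_start) / duration  # sweep rate of f(t) = f_start * e^(k*t)
samples = []
for n in range(n_samples):
    t = n / sample_rate
    # Phase is the integral of the instantaneous frequency f(t):
    phase = 2 * math.pi * f_start * (math.exp(k * t) - 1) / k
    samples.append(int(0.5 * 32767 * math.sin(phase)))  # half amplitude, ears first

with wave.open("sweep.wav", "wb") as wav:
    wav.setnchannels(1)
    wav.setsampwidth(2)
    wav.setframerate(sample_rate)
    wav.writeframes(struct.pack(f"<{n_samples}h", *samples))
```

An exponential (rather than linear) sweep spends equal time per octave, which better matches how pitch is perceived.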
The Fletcher-Munson Curves
Perhaps the most counterintuitive quirk of human hearing is that our frequency response is neither flat nor fixed: it changes with playback level. Simply put: our ears are biased. A 100 Hz bass synth and a 1000 Hz vocal track played at the exact same physical amplitude (measured in SPL) will not sound equally loud to a human listener.
In the 1930s, researchers Harvey Fletcher and Wilden A. Munson discovered that human ears are overwhelmingly sensitive to mid-range frequencies and shockingly deaf to low frequencies when played quietly, and they mapped this bias as a family of equal-loudness contours. To make a low bass note sound as loud as a mid-range note at low volumes, the bass must be driven with vastly more acoustic power.
This biological quirk is the direct reason why music sounds "flat" and lifeless when you turn the volume down low at night—your ears have effectively filtered the bass out.
The 100Hz tone sounds very quiet compared to the 1kHz tone.
The 100Hz tone is now massively boosted to sound equally loud.
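This bias can be put in numbers. The A-weighting curve used in sound-level meters is a descendant of equal-loudness contours like Fletcher-Munson's; a small sketch of the standard formula shows just how heavily the ear discounts 100Hz relative to 1kHz:

```python
import math

# A-weighting: a standard approximation of the ear's reduced sensitivity
# at low (and very high) frequencies, normalized to ~0 dB at 1 kHz.
def a_weighting_db(f):
    """Return the A-weighting correction in dB for frequency f (Hz)."""
    ra = (12194**2 * f**4) / (
        (f**2 + 20.6**2)
        * math.sqrt((f**2 + 107.7**2) * (f**2 + 737.9**2))
        * (f**2 + 12194**2)
    )
    return 20 * math.log10(ra) + 2.00  # +2.00 dB normalizes 1 kHz to ~0 dB

print(round(a_weighting_db(1000), 1))  # 0.0 dB
print(round(a_weighting_db(100), 1))   # -19.1 dB: 100Hz is heavily discounted
```

A -19 dB penalty means the 100Hz tone needs nearly 100 times more acoustic power to register as loud as the 1kHz tone, which is what the boosted demo above compensates for.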
Auditory Masking: How MP3 Files Trick Your Brain
Beyond frequency bias, our hearing system suffers from a processing limitation known as auditory masking. If a loud sound and a quiet sound occur simultaneously (or within milliseconds of each other), the louder sound effectively erases the quieter one from your perception.
This happens because the loud sound dominates the basilar membrane's response in that frequency region: the excitation it produces spreads across neighboring frequencies and buries the smaller vibration the quiet sound would have caused, so the quieter transient never registers. This "flaw" in human hearing is the entire foundation for psychoacoustic data compression algorithms like MP3 and AAC.
A solid tone gets completely masked when the burst of noise happens.
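To make the encoder's logic concrete, here is a deliberately toy model of that decision. The numbers (a masking shadow falling off 10 dB per semitone of spectral distance) are invented for illustration and far simpler than any real psychoacoustic model:

```python
# Toy version of the decision an MP3-style encoder makes: a spectral
# component below the masking threshold set by a nearby loud component
# is inaudible and can be discarded. Threshold model is hypothetical.
def masked(component_db, masker_db, distance_semitones):
    """Return True if the component is buried under the masker's shadow,
    assuming the shadow falls off 10 dB per semitone of distance."""
    threshold_db = masker_db - 10 * distance_semitones
    return component_db < threshold_db

# A quiet 30 dB tone next to a loud 80 dB noise burst:
print(masked(component_db=30, masker_db=80, distance_semitones=2))   # True: inaudible, drop it
print(masked(component_db=30, masker_db=80, distance_semitones=12))  # False: far enough away to survive
```

A real encoder runs this kind of test across the whole spectrum, frame by frame, and spends its bit budget only on components a human could actually hear.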
