The most important part of any sound system: your ears and brain. Hearing explained

We’ve talked a lot about audio gear: digital-to-analog converters, amplifiers, speakers, headphones, and so on. But sometimes we forget about the most important audio gear out there, the gear we were born with: our ears and brain. This is where everything ends up, the final stop in our “audio system”.

This is the part that causes different sound systems to be appreciated in different ways by different people.

Through them, sound offers us a powerful means of communication, and the sense of hearing helps us experience the world around us. However, human communication is multisensory, involving visual and tactile input besides sound.

I think that vision is the sensory input that acquires the most data and requires the most brain processing power. While dealing with the environment, even if we also use our hearing, we rely mostly on our vision.

Vision being our most trained and most used sensory input, it’s easier to compare and even remember images than to remember or compare sounds. Practically, unless we are musicians or work in the music field, our audio memory may not be very good by default.

This is an interesting test for audio (rhythm) memory. If you score badly and cannot tell different rhythms apart, it will be very hard to tell the differences between pieces of audio gear (differences in soundstage, detail, dynamics and transients are harder to distinguish than rhythm changes). However, in time your audio memory gets better and your ears become more sensitive to these aspects.

This may be one of the main reasons the audio hobby grows on you in time.

You may also want to join the Philips Golden Ears Challenge, which has very good training and tests. I have passed all the tests and received the “Golden Ears” award. I thought it was fun and actually quite helpful.

Sound

Before getting to the hearing system itself, we should talk a little about sound. Sound has a physical basis: it is vibrational energy.

I found the explanation here to be very good:

It is created when a medium such as air, wood, metal, or a person’s vocal cords vibrate. Sounds carried as energy are transferred from one molecule to the next in the vibrating medium. To understand sound, consider the analogy in which a stone is dropped into a body of water. This action produces ripples that will spread out in all directions from the point where the stone contacted the water. The ripples become weaker (decrease in intensity) as they get farther away from the origin. So it is with sound. The vibration through a medium proceeds in waves. However, unlike ripples on water, sound waves move away from their point of origin in three dimensions, not just two.

Sound waves possess specific characteristics. Frequency represents the number of complete wave cycles per unit of time, usually one second (see Figure 5). Frequency is expressed in hertz (Hz), which means cycles per second. Low-frequency sounds are those that vibrate only a few times per second, while high-frequency sounds vibrate many more times per second. The term used to distinguish your perception of higher-frequency sounds from lower-frequency sounds is pitch.

Figure 5. Representation of frequency. The arrow indicates one cycle of the sound wave.
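
To make “cycles per second” concrete, here is a minimal Python sketch (assuming NumPy is available) that synthesizes a 440 Hz tone and then counts its cycles:

```python
import numpy as np

sample_rate = 44100                        # samples per second
t = np.arange(sample_rate) / sample_rate   # one second of time stamps
frequency = 440.0                          # 440 cycles per second (the A4 note)

wave = np.sin(2 * np.pi * frequency * t)

# A sine crosses zero twice per cycle, so counting sign changes and halving
# the result should land near 440 for this one-second signal.
crossings = np.count_nonzero(np.diff(np.sign(wave)))
print(crossings / 2)   # ~440 cycles per second
```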

The speed of sound is constant for all frequencies, although it does vary with the medium through which it travels. In air, sound travels at a speed of roughly 340 meters per second. Sound travels fastest through metals because the molecules of that medium are packed very closely together. Similarly, sound travels about four times faster in water than in air, and it travels faster in humid air than dry air; in addition, humid air absorbs more high frequencies than low frequencies, leading to differences in the perception of sound heard through the two media. Finally, temperature can affect the speed of sound in any medium. For instance, the speed of sound in air increases by about 0.6 meters per second for each degree Celsius increase in temperature.
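
The temperature rule at the end of that quote is easy to turn into a formula. A small sketch, assuming the commonly cited baseline of ~331 m/s at 0 °C:

```python
def speed_of_sound_air(temp_c: float) -> float:
    """Approximate speed of sound in dry air, in m/s: 331 m/s at 0 °C,
    plus ~0.6 m/s per degree Celsius, as quoted above."""
    return 331.0 + 0.6 * temp_c

for temp in (0, 15, 20, 30):
    print(f"{temp:>2} °C -> {speed_of_sound_air(temp):.1f} m/s")
# 15 °C gives 340.0 m/s, matching the "roughly 340 meters per second" figure
```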

The human ear responds to frequencies in the range of 20 Hz to 20,000 Hz (20 kHz), although most speech frequencies lie between 100 and 4,000 Hz. Frequencies above 20,000 Hz are referred to as ultrasonic. Though ultrasonic frequencies are outside the range of human perception, many animals can hear these sounds. For instance, dogs can hear sounds at frequencies as high as 50,000 Hz, and bats can hear sounds as high as 100,000 Hz. Other sounds, such as some produced by earthquakes and volcanoes, have frequencies of less than 20 Hz. These sounds, referred to as infrasonic or subsonic, are also outside the range of human hearing.

Figure 6. The sound spectrum.

We all know that sounds can be louder or softer, but what does this mean? Sound is energy, and this energy, when traveling through air, displaces, or vibrates, air molecules. For example, the softest sound humans can hear is a sound that displaces particles of air by one-billionth of a centimeter. The extent to which air particles move from their original resting point determines the amplitude of the sound wave (see Figure 7). The greater the amplitude of the sound wave, the greater the intensity, or pressure, of the sound. Intensity refers to the overall amplitude of a sound. This distinction in terms is necessary, since nearly all sounds to which we are exposed are complex sounds made up of a combination of sound waves. Loudness is our perception of the intensity, frequency, and duration of a sound.

Sound intensity is measured in relation to an accepted reference point. One such reference is the threshold at which a sound can be heard. How the intensity of any given sound compares with this standard reference level is given in units known as decibels (dB). The decibel is one-tenth of a bel, a unit named after the inventor Alexander Graham Bell. The decibel scale is not a linear one, but rather represents the ratio of the sound to the reference standard. To understand why ratios are necessary, consider the tremendous range of sound intensities we are capable of hearing. Scientists estimate that the human ear is sensitive to about 100,000,000,000,000 (10^14) units of intensity. Also consider that a shout is about 1,000,000 (10^6) times more powerful than a whisper. Because dealing with such large numbers is cumbersome, the decibel scale is used to simplify comparisons (see Table 1). Every 10-dB increase represents a 10-fold increase in sound intensity and is perceived as roughly a doubling of loudness.
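
Since the decibel is just a logarithmic ratio, the numbers in that quote can be checked in a couple of lines. A sketch in Python:

```python
import math

def intensity_db(intensity_ratio: float) -> float:
    # Level in decibels for a given intensity ratio: 10 * log10(ratio).
    return 10 * math.log10(intensity_ratio)

print(intensity_db(10 ** 6))    # shout vs. whisper ratio -> 60.0 dB
print(intensity_db(10 ** 14))   # full range of hearing  -> 140.0 dB
```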

I also found something that a lot of people tend to ignore, especially kids. I remember standing in the middle of very loud music in clubs while I was in high school. Luckily I wasn’t a big fan of clubs and did not go there often.

Individuals are often unaware of the damage loud noise does to their hearing. Even common noises, such as highly amplified music and gas-engine mowers or leaf blowers, can damage human hearing with prolonged exposure. Sporting events can also expose individuals to hazardous decibel levels as defined by the Occupational Safety and Health Administration (OSHA). Under OSHA guidelines, the limit of continuous noise exposure for an eight-hour day in an industrial setting is 90 dB. OSHA also prohibits workplace impact noise (short bursts of sound) greater than 140 dB. By increasing our awareness of decibel levels of common environmental noises, we can better limit our exposure to hazardous noise levels or take measures to protect our ears.

You should also take care of your ears when going to concerts: standing in front of the loudspeakers exposes you to sound pressure levels of about 120 dB, which will begin to damage your hearing in ~7 minutes.
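
That ~7 minutes figure can be derived from the OSHA rule quoted earlier. A hedged sketch, assuming OSHA’s 90 dB / 8-hour limit and its 5 dB exchange rate (every 5 dB increase halves the allowed time):

```python
def osha_permissible_hours(level_db: float) -> float:
    # 8 hours are allowed at 90 dB; each +5 dB halves the permissible time.
    return 8.0 / 2 ** ((level_db - 90.0) / 5.0)

for level in (90, 100, 110, 120):
    print(f"{level} dB -> {osha_permissible_hours(level) * 60:.1f} minutes")
# 120 dB works out to 7.5 minutes, which is where the ~7 minute figure comes from
```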

Hearing System

Let’s get back to our hearing system. As I said before, it is formed by two major components: the ears and the brain. Yes, no matter which senses are used, the understanding occurs in the brain.

Did you know that there are specialized cells in the inner ear whose responsibility is to convert the vibrational waves of sound into electrical signals that the brain can interpret? This sounds like an ADC (Analog to Digital Converter) to me. So what happens in the recording–listening process? The first step is recording, which these days is usually done in a digital format.
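
Since the ADC analogy keeps coming up, here is a toy Python sketch (NumPy assumed) of the two things every ADC does, sampling and quantization; the inner ear’s real output is nerve spike trains, so the parallel is loose:

```python
import numpy as np

def adc(signal: np.ndarray, bits: int = 16) -> np.ndarray:
    """Quantize a signal in [-1, 1] to signed integers, like a 16-bit ADC."""
    levels = 2 ** (bits - 1) - 1
    return np.round(np.clip(signal, -1.0, 1.0) * levels).astype(np.int32)

rate = 8000
t = np.arange(rate) / rate                  # sampling: discrete time stamps
analog = 0.5 * np.sin(2 * np.pi * 440 * t)  # the "analog" input
digital = adc(analog)                       # quantization: storable integers
print(digital[:5])
```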

So we really have to use an ADC for that. Then, to play the music, we have to use our entire sound system, which can mean a huge stack of gear: source -> DAC -> amplification -> speakers/headphones -> ears. Inside the ears, the signal is once again converted into electrical signals. Wouldn’t it be nice to send signals directly into the brain? This could be great for deaf people as well.

This is a very interesting study on the subject:

The cochlear implant, a microelectrode array that directly stimulates the auditory nerve, has greatly benefited many individuals with profound deafness. Deaf patients without an intact auditory nerve may be helped by the next generation of auditory prostheses: surface or penetrating auditory brainstem implants that bypass the auditory nerve and directly stimulate auditory processing centers in the brainstem.

[Image: ear implant]

Before getting to the brain, let’s take a closer look at the ear. Many people think that ears don’t require any preventive maintenance. Sometimes we take hearing for granted and don’t take good care of our ears, but we’ll get back to this later.

This is the structure of our ears:

[Image: the structure of the ear]

When sound arrives in the ear, it is processed in a few different steps. These steps happen in three sections: the outer ear, the middle ear and the inner ear.

The pathway from the outer ear to the inner ear is remarkable in its ability to precisely process sounds from the very softest to the very loudest and to distinguish very small changes in the frequency of sound (pitch). Humans can discern a difference in frequency of just 0.1 percent. This means that humans can tell the difference between sounds at frequencies of 1,000 Hz and 1,001 Hz.
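
You can test that 1,000 Hz vs. 1,001 Hz claim on yourself. A small sketch using Python’s standard wave module plus NumPy, which writes the two tones to WAV files (the file names are just placeholders):

```python
import wave
import numpy as np

def write_tone(path: str, freq: float, seconds: float = 2.0, rate: int = 44100) -> None:
    """Write a mono 16-bit WAV file containing a pure sine tone."""
    t = np.arange(int(seconds * rate)) / rate
    samples = (0.3 * np.sin(2 * np.pi * freq * t) * 32767).astype(np.int16)
    with wave.open(path, "wb") as f:
        f.setnchannels(1)     # mono
        f.setsampwidth(2)     # 16-bit samples
        f.setframerate(rate)
        f.writeframes(samples.tobytes())

write_tone("tone_1000hz.wav", 1000.0)
write_tone("tone_1001hz.wav", 1001.0)   # can you tell them apart?
```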

The outer ear is formed by the auricle or pinna (the part that we can all see) and the ear canal. The pinna has the role of collecting sounds and focusing them into the middle and inner portions of the ear. It also helps in determining the direction from which a sound originates.

The ear canal is about 2.5 cm long and leads to the eardrum of the middle ear. The ear canal also has glands that secrete a wax-like substance. Together with the hairs present in the canal, the wax prevents dust, insects and other objects from going deeper into the ear, while keeping a constant humidity and temperature in the middle ear.

Individuals should not attempt to remove earwax, since this secretion will work itself out of the canal naturally in most cases. To avoid damage, it should be removed by a medical professional. Hearing researchers strongly concur with the truth of the adage: Put nothing smaller than your elbow into your ear.

The ear canal acts as an amplifier for sound frequencies between 3,000 and 4,000 Hz.

The middle ear is separated from the outer ear by the eardrum, which is a continuously growing structure. This means that damage to the membrane can generally be repaired. The eardrum is elastic, and this allows it to vibrate in response to sound waves.

The elegance of the middle ear system lies in its ability to greatly amplify sound vibrations before they enter the inner ear.

The middle ear is an air-filled space. It is connected to the back of the throat by a small tube called the eustachian tube, which allows the air in the middle ear space to be refreshed periodically. The eustachian tube can become blocked by infection, and fluid may fill the middle ear space. Changes in air pressure can also affect the tympanic membrane, resulting in the ear-popping phenomenon experienced by people who fly in airplanes or drive over mountain roads. The membrane may bend in response to altered air pressure and then “pop” back to its original position when the eustachian tube opens and internal and external air pressures are equalized.

The inner ear houses the important sensory hearing cells: the outer hair cells and the inner hair cells. The outer ones act like a biological amplifier/attenuator, boosting soft sounds and dampening loud sounds. The inner hair cells transfer sound information into the auditory nerve, which carries it to the various brainstem and auditory cortex regions of the brain so the information can be processed.
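
The outer hair cells’ boost-the-soft / dampen-the-loud behavior is often modeled as a compressive nonlinearity. A crude sketch, where the power-law exponent of 0.3 is an illustrative assumption rather than a physiological constant:

```python
import numpy as np

def cochlear_compression(x: np.ndarray, exponent: float = 0.3) -> np.ndarray:
    # Power-law compression: soft inputs get relatively more gain than loud ones.
    return np.sign(x) * np.abs(x) ** exponent

print(cochlear_compression(np.array([0.01, 1.0])))
# A 100:1 range of inputs comes out as roughly 4:1 (0.01**0.3 ≈ 0.25 vs 1.0),
# squeezing a wide dynamic range into something the inner hair cells can encode.
```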

This article has some very interesting information about how the brain processes audio.

The brainstem receives data streams from both ears in the form of firing patterns that include information about the incoming audio signals. First, the brainstem feeds back commands to the middle ear muscles and the inner ear’s outer hair cells to optimise hearing in real time. Although this is a subconscious process in the brainstem, it is assumed that the feedback is optimized through learning; individual listeners can train the physical part of their hearing capabilities.

[Image: audio processing in the brain]

This is where the processing power of the brain kicks in.

Rather than interpreting, storing and recalling each of the billions of nerve impulses transmitted by our ears to our brain every day, dedicated parts of the brain – the brain stem and the auditory cortex – interpret the incoming information and convert it into hearing sensations that can be enjoyed and stored as information units with a higher abstraction than the original incoming information: aural activity images and aural scenes.

The information streams are received by the brainstem and then sent to the auditory cortex, where aural activity images like level, pitch and localisation are created.

Comparing the aural activity images with previously stored images, other sensory images (e.g. vision, smell, taste, touch, balance) and overall context, a further aggregated and compressed aural scene is created to represent the meaning of the hearing sensation. The aural scene is constructed using efficiency mechanisms – selecting only the relevant information in the auditory activity image – and correction mechanisms – filling in gaps and repairing distortions in the auditory activity images.

The aural scene is made available to the other processes in the brain – including thought processes such as audio quality assessment.

So the auditory cortex converts the raw audio data into aural information that helps you perceive the sound as you do.

The science of describing, measuring and classifying the creation of hearing sensations by the human auditory cortex is the field of psycho-acoustics. The four main psycho-acoustic parameters of audio perception are loudness, pitch, timbre and localization.

[Image: the four psycho-acoustic parameters]
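
As a taste of how one of these parameters can be handled in software, here is a minimal pitch-estimation sketch: take the spectrum of a short recording and pick the strongest peak (real pitch perception is far more involved than this):

```python
import numpy as np

rate = 44100
t = np.arange(rate) / rate
tone = np.sin(2 * np.pi * 440 * t)     # a synthetic 440 Hz "recording"

spectrum = np.abs(np.fft.rfft(tone))
freqs = np.fft.rfftfreq(tone.size, d=1 / rate)
print(freqs[np.argmax(spectrum)])      # ~440.0 Hz, the perceived pitch
```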

Conclusion

So, as you can see, the process of hearing is very complex and involves many steps.

Besides the ear, the brain has a lot of processing to do, and these processes may not be fully understood at the moment.

There are a lot of debates in audio, with people saying that beyond a certain level your ears cannot hear the differences. While that may be true, the “ADC” inside the ear and the processing inside the brain may be doing things that make a difference after all.

I’ve recently bought an amplifier with an S/N ratio of >130 dB and THD < 0.002%. While some say that in theory this cannot be heard, I have to say that this amplifier has the cleanest sound I’ve heard so far. There is an audible difference between my old amplifiers and this one in terms of transparency. I also brought in two friends of mine who are not audiophiles at all, and they both heard the difference.
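
For what it’s worth, THD itself is a measurable quantity: feed the device a pure tone and compare the energy at the harmonics with the energy at the fundamental. A sketch with a synthetic cubic nonlinearity standing in for the amplifier under test:

```python
import numpy as np

rate, f0 = 48000, 1000
t = np.arange(rate) / rate
# A pure 1 kHz tone plus a tiny cubic distortion term (the "device under test").
output = np.sin(2 * np.pi * f0 * t) + 1e-4 * np.sin(2 * np.pi * f0 * t) ** 3

spectrum = np.abs(np.fft.rfft(output))           # 1 Hz per FFT bin here
fundamental = spectrum[f0]
harmonics = [spectrum[k * f0] for k in range(2, 6)]
thd = np.sqrt(sum(h ** 2 for h in harmonics)) / fundamental
print(f"THD: {thd * 100:.4f} %")                 # ~0.0025 % for this toy signal
```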

I have found an interesting study that shows something similar to what I described above.

Although it is generally accepted that humans cannot perceive sounds in the frequency range above 20 kHz, the question of whether the existence of such “inaudible” high-frequency components may affect the acoustic perception of audible sounds remains unanswered. In this study, we used noninvasive physiological measurements of brain responses to provide evidence that sounds containing high-frequency components (HFCs) above the audible range significantly affect the brain activity of listeners.

The study is very dense so I will get directly to the main conclusions.

We conclude, therefore, that inaudible high-frequency sounds with a nonstationary structure may cause non-negligible effects on the human brain when coexisting with audible low-frequency sounds. We term this phenomenon the “hypersonic effect” and the sounds introducing this effect the “hypersonic sound.”

From an authentic view of human auditory physiology, it is not straightforward to explain the neuronal basis of the hypersonic effect characterized by the fact that HFCs showed significant physiological and psychological effects on listeners only when presented with audible sounds. Although how inaudible HFCs produce a physiological effect on brain activity is still unknown, we need to consider at least two possible explanations. The first is that HFCs might change the response characteristics of the tympanic membrane in the ears and produce more realistic acoustic perception, which might increase pleasantness.

In conclusion, our findings that showed an increase in alpha-EEG potentials, activation of deep-seated brain structures, a correlation between alpha-EEG and rCBF in the thalamus, and a subjective preference toward FRS, give strong evidence supporting the existence of a previously unrecognized response to high-frequency sound beyond the audible range that might be distinct from more usual auditory phenomena.

So, the hearing system is not simple at all, and we don’t yet have a full explanation of how it works.

Personally, I am pretty confident in my hearing and my interpretation of sound. So, when somebody tells me that what I hear is not possible because of some “theoretical” reasons, I ask them whether they truly know how sound is perceived by our hearing system.

For me it is like seeing a green object, and then somebody comes along and says the object is yellow, that it cannot possibly be green because it hasn’t been proved our eyes can see the color green. If I see it as green, then to me it is green, and that is that. It just means they haven’t found the right explanation yet.

That being said, don’t forget that your ears and brain are the most important part of your audio system.

So, have your hearing checked periodically.

I’ve also written an article about hearing safety and noise-induced hearing loss.

Bibliography: