This is an archive article published on September 11, 2011

Sound, the way the brain prefers to hear it

In designing the next great audio system, researchers are invoking the science of psychoacoustics

There is, perhaps, no more uplifting musical experience than hearing the Hallelujah chorus from Handel’s Messiah performed in a perfect space. Many critics regard Symphony Hall in Boston—70 feet wide, 120 feet long and 65 feet high—as just that space.

Some 3,000 miles away, however, a visitor led into the pitch-blackness of Chris Kyriakakis’s audio lab at the University of Southern California to hear a recording of the performance would have no way to know how big the room was. At first it sounded like elegant music played in the parlour on good equipment. But as engineers added combinations of speakers, the room seemed to expand and the music swelled in richness and depth, until finally it was as if the visitor were sitting with the audience in Boston.

Then the music stopped and the lights came on. It turned out that the Immersive Audio Lab at U.S.C.’s Viterbi School of Engineering is a bit dingy, and only 30 feet wide, 45 feet long and 14 feet high. Acousticians have been designing concert halls for more than a century, but Dr Kyriakakis does something different. He shapes the sound of music to conform to the space in which it is played. Kyriakakis, an electrical engineer at U.S.C. and the founder and chief technical officer of Audyssey Laboratories, a Los Angeles-based audio firm, could not achieve his results without modern sound filters and digital microprocessors. But the basis of his technique is rooted in the science of psychoacoustics, the study of sound perception by the human auditory system. “It’s about the human ear and the human brain, and understanding how the human ear perceives sound,” Kyriakakis said. Psychoacoustics has become an invaluable tool in designing hearing aids and cochlear implants, and in the study of hearing.


The field’s origins date back more than a century, to the first efforts to quantify the psychological properties of sound: what tones could humans hear, and how loudly or softly did they need to be played? Human hearing can discern the movement of sound with surprising accuracy. It can remember patterns of speech, immediately identifying a friend in a phone call years after last hearing the voice. And a parent can effortlessly sift the sound of an infant’s cry from the blare of a televised football game. “What is it about that sound that we can identify?” said William M. Hartmann, a Michigan State University physicist and former president of the Acoustical Society of America.

For much of the 20th century, engineers devoted themselves to developing acoustical hardware like amplifiers, speakers and recording systems. After World War II, scientists learned how to use mathematical formulas to “subtract” unwanted noise from sound signals. Then they learned how to make sound signals without any unwanted noise. Next came stereo. But stereo had no real psychoacoustics: it created an artificial sense of space with a second track, yet dealt with only one variable—loudness—and enhanced the illusion simply by suggesting that listeners separate their speakers. The digital age changed all this. Digital technology led to innovations that have been critical in improving sound reproduction, in tailoring hearing aids for individual patients, and in treating hearing impairment with cochlear implants—tiny electronic devices linking sound directly to the auditory nerve of a deaf person.

Despite recent advances, however, psychoacoustics has shown engineers they still have a long way to go. No machine can yet duplicate the ability of the human ear to understand a conversation in a crowded restaurant. “The technology is really being strained,” said Dr Hartmann, at Michigan State. Because of psychoacoustics, “we know so much more, and we can do so much more,” but “there is so much more to do.” One factor that slows the pace of innovation, Hartmann suggested, is that the human auditory system is “highly nonlinear.” It is difficult to isolate or change a single variable—like loudness—without affecting several others in unanticipated ways.

It was this anomaly, in part, that led Kyriakakis in the 1990s to venture into psychoacoustics. He, his U.S.C. film school associate Tomlinson Holman, and their students were trying to improve the listening qualities of a room by measuring sound with strategically placed microphones. “Often our changes were worse than doing nothing at all,” Kyriakakis recalled. “The mic liked the sound, but the human ear wasn’t liking it at all. We needed to find out what we had to do. We had to learn about psychoacoustics.” So Kyriakakis and his students went to Boston Symphony Hall in 1998 to conduct a series of sound tests and to record the Messiah. At that time, acousticians had long known that a shoebox-shaped concert hall like Boston’s offered the best sound, but what mattered to Kyriakakis was why the human ear, and the brain that processes its signals, judged it that way.

Back in Los Angeles, his team began a series of experiments. Listeners were invited into the labs to hear the Boston tests and music and to rate the sound, using a scale of 1 to 5. Researchers shifted the sound to different combinations of speakers around the room. Statistics showed that speakers directly ahead, combined with speakers 55 degrees to either side of the listener, provided the most attractive soundstage. The “wide” speakers mimicked the reflection from the side walls of the concert hall by causing the sound to arrive at the listener’s ears milliseconds after the sound from the front. Sound from other angles did not have as great an effect.
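The effect the “wide” speakers exploited can be sketched in a few lines of code: a copy of the signal, attenuated and offset by a few milliseconds, stands in for the sound reflected off a side wall. The sample rate, delay and gain values below are illustrative, not figures from the article or from Audyssey.

```python
import numpy as np

SAMPLE_RATE = 48_000  # samples per second (illustrative)

def delayed_copy(signal: np.ndarray, delay_ms: float, gain: float = 0.7) -> np.ndarray:
    """Return a copy of `signal` delayed by `delay_ms` milliseconds and attenuated.

    This stands in for a concert hall's side-wall reflection, which reaches
    the listener slightly later, and slightly quieter, than the direct sound.
    """
    delay_samples = int(round(SAMPLE_RATE * delay_ms / 1000))
    out = np.zeros(len(signal) + delay_samples)
    out[delay_samples:] = gain * signal
    return out

# One second of a 440 Hz tone as the "direct" sound from the front speakers,
# plus a 10 ms delayed copy as the simulated reflection, summed at the listener.
direct = np.sin(2 * np.pi * 440 * np.arange(SAMPLE_RATE) / SAMPLE_RATE)
reflection = delayed_copy(direct, delay_ms=10)
mix = reflection.copy()
mix[: len(direct)] += direct
```

Because the delayed copy arrives within a few tens of milliseconds, the ear fuses it with the direct sound rather than hearing an echo, which is why the room seems to widen instead of sounding doubled.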


Next, the team asked listeners what combination of speakers gave the best impression of “depth of stage.” Here again, statistics showed a clear preference for speakers in front of listeners and high above them. This sound—also slightly delayed—gave the ear and the human brain a sense of where the different instruments were on a bandstand. With these results as his template, Kyriakakis founded Audyssey. His idea was to make dens and living rooms sound like concert halls and movie theatres. Microprocessors made it possible to filter sound to minimise distortion and add the delays that make the music sound nearly perfect to the human ear from anywhere in the room.

GUY GUGLIOTTA
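The filter-plus-delay processing the article describes can be illustrated with a toy correction stage: an FIR filter smooths the speaker's response, then the output is held back so it aligns with the other channels. The three filter taps and the delay here are made-up stand-ins; Audyssey's measured correction filters are proprietary and far longer.

```python
import numpy as np

SAMPLE_RATE = 48_000  # illustrative sample rate

def correct_and_delay(signal: np.ndarray, fir_taps: np.ndarray, delay_ms: float) -> np.ndarray:
    """Apply a room-correction FIR filter, then delay the channel for alignment."""
    filtered = np.convolve(signal, fir_taps, mode="full")
    delay_samples = int(round(SAMPLE_RATE * delay_ms / 1000))
    return np.concatenate([np.zeros(delay_samples), filtered])

# A 3-tap smoothing filter as a stand-in for a measured correction filter.
taps = np.array([0.25, 0.5, 0.25])
tone = np.sin(2 * np.pi * 1000 * np.arange(1000) / SAMPLE_RATE)
out = correct_and_delay(tone, taps, delay_ms=5)
```

In a real system each speaker channel would get its own filter and delay, derived from microphone measurements of the room, so that every seat hears the corrected, time-aligned result.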
