An Attention-Grabbing Finding

Fall 2010

Most people have had the experience of walking into a noisy room full of people and at first being aware only of an indecipherable babble. But we are quickly able to tune in to certain sounds in the environment, the music or a particular conversation, for instance, and block out the others. While this ability seems natural, it’s not entirely clear how we manage it. And so far, we don’t know how to build machines that do it very well either. Mounya Elhilali, assistant professor of electrical and computer engineering and a researcher at the Center for Language and Speech Processing, is working to untangle just how the brain manages this complex task. Her work could lead to better telephone voice automation systems, improved hearing aids, even smarter eavesdropping devices.

“Our research tries to understand how the brain solves this problem, and how we can translate that into machines that interact with the environment, no matter how complex the environment is,” Elhilali says.

In an article in PLoS Biology, Elhilali and colleagues looked at how the brain switches between different auditory signals, and found that it relies on a combination of conscious and unconscious processes.

In the experiment, volunteers were asked to listen to an auditory “scene,” which consisted of a beep that repeated four times a second, against a background of random sounds. The volunteers’ neural signals were simultaneously measured with a magnetoencephalograph (MEG), which detects magnetic fields created by electric currents in the brain.

The MEG showed a signal pulsing four times a second, coming from a brain structure called the auditory cortex. The device was detecting the activity of neurons dedicated to representing the target sound. The researchers then asked subjects to pay attention either to the regular pulse or to the background sound. When the volunteers focused on the regular pulse, the measured magnetic field became up to four times stronger, indicating that more of the brain was being dedicated to the task of representing the sound.
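The effect described above can be caricatured numerically: attention boosts the amplitude of the brain response locked to the 4 Hz beep relative to the background. Here is a minimal sketch in Python with NumPy; the fourfold gain factor, sampling rate, noise level, and all other parameters are illustrative assumptions, not values from the study.

```python
import numpy as np

def amplitude_at(freq, signal, fs):
    """Amplitude of the Fourier component nearest `freq` (in Hz)."""
    spectrum = np.abs(np.fft.rfft(signal)) * 2.0 / len(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return spectrum[np.argmin(np.abs(freqs - freq))]

fs = 200                          # assumed sampling rate, Hz
t = np.arange(0, 10, 1.0 / fs)    # a 10-second "recording"
rng = np.random.default_rng(0)

target = np.sin(2 * np.pi * 4.0 * t)    # response locked to the 4 Hz beep
noise = rng.normal(0.0, 1.0, t.size)    # background brain activity

unattended = target + noise             # subject ignores the beep
attended = 4.0 * target + noise         # assumed fourfold attentional gain

print(amplitude_at(4.0, unattended, fs))  # near 1: beep barely stands out
print(amplitude_at(4.0, attended, fs))    # near 4: attended beep dominates
```

Measuring the spectrum only at the known 4 Hz stimulus rate is what lets the tiny signal be pulled out of broadband noise, loosely analogous to how the MEG analysis isolates the beep-locked response.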

The researchers could also see the brain signal increase when they made the sound louder, but never by as much as it increased when the subjects were simply asked to pay attention to it.

“Our study shows that when we focus our attention to one sound among a number of competing background sounds, our brain boosts the representation of this target sound relative to all other sounds,” says Elhilali. “The brain responds more vigorously to this object of attention and also causes different populations of neurons to all respond at the same time to the target sound.”

Elhilali says the results give researchers some insight into how the brain untangles sounds. Specifically, they show how a “top down” brain process, like attention, interacts with a “bottom up” process, like salience, in which a loud or sudden noise automatically grabs our attention.

Such insights, in addition to shaping better technology, could eventually help doctors understand more clearly why the ability to make sense of our aural environment goes awry with cognitive impairment and aging.