Photo of professor Mounya Elhilali wearing headphones

At a crowded party or while listening to an orchestra, humans can discern a specific voice or instrument. How does our auditory system accomplish this so skillfully? This task, known as the “cocktail-party problem,” is challenging for machines, but humans seem to do it effortlessly.

A study led by Mounya Elhilali, a faculty member in the Department of Electrical and Computer Engineering at the Johns Hopkins Whiting School of Engineering, reveals that the human auditory system uses a complex process to focus on specific sounds. The team’s results appeared in EURASIP Journal on Audio, Speech, and Music Processing.

“In this study, we found that the brain employs a combination of strategies—including considering sound patterns, frequencies, and other characteristics, relying on memory to recognize familiar sounds, and deploying these memories to focus on sounds that are important to it—to allow us to pay attention to a single sound in noisy or cacophonous environments,” said Elhilali, Charles Renn Faculty Scholar and Professor in the Whiting School’s Department of Electrical and Computer Engineering.

The team found that the brain uses the same mechanisms whether focusing on speech or musical sounds, challenging a long-held notion that speech and music require domain-specific processing.

“We found that common computational principles govern all sound source separation,” Elhilali said.
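One of the cues the study describes is grouping sound by frequency. As a purely illustrative sketch (not the team’s actual model), the toy Python example below mixes a low tone and a high tone, then recovers the low one by "attending" to its frequency band with a simple spectral mask; the function name `attend_band` and all parameters are hypothetical choices for this demo.

```python
import numpy as np

fs = 16000                      # sample rate in Hz
t = np.arange(0, 0.5, 1 / fs)   # half a second of time samples

# Two "sources": a low tone and a high tone, summed into one mixture,
# loosely analogous to two voices at a party.
low = np.sin(2 * np.pi * 220 * t)
high = np.sin(2 * np.pi * 2200 * t)
mixture = low + high

def attend_band(signal, fs, lo_hz, hi_hz):
    """Keep only spectral components inside [lo_hz, hi_hz] (hypothetical helper)."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    mask = (freqs >= lo_hz) & (freqs <= hi_hz)
    return np.fft.irfft(spectrum * mask, n=len(signal))

# "Attend" to the band around 220 Hz and recover the low tone.
recovered_low = attend_band(mixture, fs, 100, 400)

# Correlation with the original low tone shows how well separation worked.
corr = np.corrcoef(recovered_low, low)[0, 1]
```

Real auditory scenes involve overlapping, time-varying spectra, which is why the brain must combine frequency cues with pattern and memory cues rather than rely on a fixed filter like this one.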

In addition to improving understanding of the human auditory system, the study results could lead to the development of better speech recognition systems, noise-canceling headphone technology, hearing aids, cochlear implants, and other devices.