We’ve all been there—at a noisy party, surrounded by voices, yet somehow able to focus on just one person’s words. A team that included Johns Hopkins engineers has uncovered insights into how the brain pulls off this feat, showing its remarkable ability to “lock onto” a single voice amid background noise. Studying ferrets, the researchers found that the brain not only amplifies its response to the target sound but also boosts neural activity across regions involved in complex processing—a skill essential for social interaction and survival.
The team’s work, “Temporal coherence shapes cortical responses to speech mixtures in a ferret cocktail party,” which appears in Communications Biology, could improve hearing technology, especially for those who rely on hearing aids in noisy settings.
“Our findings show that when the ferrets concentrate on the target voice, their brains exhibit responses to that voice across various regions, especially those responsible for more complex processing. This enables the brain to differentiate the target sound from the background noise,” said co-author Karan Thakkar, a graduate student in the Whiting School of Engineering’s Department of Electrical and Computer Engineering.
The researchers attribute this effect to a well-known brain mechanism called “temporal coherence.”
“Temporal coherence is the principle that the brain naturally synchronizes its responses to the timing and features of the target sound,” said team member Mounya Elhilali, a professor of electrical and computer engineering. “When the ferrets in our study focused on a specific voice, their brains aligned with its timing and sound features, creating a mental pathway that helps isolate it from other sounds.”
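To make the principle concrete, here is a minimal Python sketch of coherence-based binding. It is not the study’s model, and every number in it is invented for illustration: two talkers are reduced to toy amplitude envelopes, simulated “neural channels” track one envelope or the other plus noise, and channels whose activity rises and falls in time with the attended voice are bound into one stream.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy amplitude envelopes for two talkers, modulated at different rates
# (rates, duration, and noise level are arbitrary choices for illustration).
t = np.linspace(0, 2, 2000)                         # 2 s sampled at 1 kHz
target = 0.5 * (1 + np.sin(2 * np.pi * 4 * t))      # 4 Hz syllable-like rhythm
distractor = 0.5 * (1 + np.sin(2 * np.pi * 7 * t))  # 7 Hz competing rhythm

# Simulated "neural channels": each tracks one talker's envelope plus noise.
n_per_talker = 5
channels = np.vstack(
    [target + 0.3 * rng.standard_normal(t.size) for _ in range(n_per_talker)]
    + [distractor + 0.3 * rng.standard_normal(t.size) for _ in range(n_per_talker)]
)

# Temporal coherence: correlate each channel's time course with the attended
# (target) envelope and bind together the channels that co-vary with it.
coherence = np.array([np.corrcoef(ch, target)[0, 1] for ch in channels])
bound_to_target = coherence > 0.5   # arbitrary illustrative threshold

print("coherence with target:", np.round(coherence, 2))
print("channels bound to the target stream:", np.where(bound_to_target)[0])
```

Run as-is, the first five channels (those driven by the attended envelope) show high coherence and are grouped together, while the channels tracking the competing rhythm do not.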
In the study, researchers trained ferrets—chosen because their auditory processing systems resemble those of humans—to recognize a target word, “/Fa-Be-Ku/,” spoken by a female voice, even in the presence of a competing male voice uttering similar syllables. The team then recorded the ferrets’ brain activity, mapping how the brain processed auditory information across different regions. They measured responses in both primary and secondary auditory fields, which handle basic sound processing, as well as in the frontal cortex, which manages attention and higher-level processing. The results confirmed that temporal coherence was at work.
A computer model built by the researchers confirmed this conclusion: when it “listened” to overlapping voices, attending to a single voice made that voice stand out more clearly, just as it did in the ferrets.
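The study’s actual model is not reproduced here, but as a crude stand-in, the sketch above can be extended so that attention acts as a gain on the coherent channels: a coherence-weighted readout then follows the attended voice while suppressing the competitor.

```python
# Continuing the illustrative sketch above: weight each channel by its
# coherence with the attended envelope, a crude stand-in for attention
# boosting the responses that track the attended voice.
weights = np.clip(coherence, 0, None)     # ignore anti-correlated channels
readout = weights @ channels / weights.sum()

# The attended talker dominates the readout; the competitor is suppressed.
print("readout vs. target:     r =", round(np.corrcoef(readout, target)[0, 1], 2))
print("readout vs. distractor: r =", round(np.corrcoef(readout, distractor)[0, 1], 2))
```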
“This finding suggests that our brains use the same strategy in real life, helping us make sense of noisy situations by tuning in to one sound source at a time,” said Elhilali.
The authors believe that understanding selective hearing has important implications for auditory science.
“One of the biggest challenges has been solving the cocktail party problem—how we pick out one voice from many in a noisy setting,” said Thakkar. “Temporal coherence offers a promising explanation, as it helps the brain naturally bind neurons responding to the same sound, making the target voice stand out from distractions.”
This research also has the potential to improve hearing technology, especially for hearing aids that struggle to help wearers comprehend voices in noisy settings.
“Current hearing aids often have difficulty separating sounds in noisy environments,” said Elhilali. “By mimicking how the mammalian auditory system (like ferrets and humans) solves this problem, it is possible to develop technologies that enhance selective hearing, a benefit to those who rely on hearing aids and individuals with auditory processing challenges.”
The study was led by Neha Joshi and Shihab Shamma of the University of Maryland. Co-authors include Wing Yiu Ng and Pingbo Yin, also of the University of Maryland; Daniel Duque of the University of Salamanca; and Jonathan Fritz of New York University.