Improving Hearing Prosthetics
New research, published in the November edition of PLOS Computational Biology, offers insight into how the brain processes timbre, a hard-to-quantify concept loosely defined as everything in music that isn’t duration, loudness or pitch.
The information may one day change the design of hearing prosthetics, potentially helping people who suffer from hearing loss to continue to tap into their musical intuition in a way current devices on the market cannot, according to Mounya Elhilali, the study’s lead author and an assistant professor in the Department of Electrical and Computer Engineering in the Whiting School of Engineering.
“Our research has direct relevance to the kinds of responses you want to be able to give people with hearing impairments,” says Elhilali. “People with hearing aids or cochlear implants don’t really enjoy music nowadays, and part of it is that a lot of the little details are being thrown away by hearing prosthetics.”
The researchers set out to examine the neural underpinnings of musical timbre, attempting both to define what makes a piano sound different from a violin and to explore the process by which the brain recognizes timbre. Based on experiments in animals and humans, they devised a computer model that accurately mimics how specific brain regions process sounds as they enter our ears and are transformed into brain signals that allow us to recognize different types of sounds.