Our lab studies how the brain parses complex acoustic information entering through the ears and integrates it with our cognitive state and prior knowledge (attentional state, statistical priors, expectations about the world) to guide perception and behavior.
Our goal is to translate this knowledge into better engineering systems for intelligent parsing of complex soundscapes: recognizing audio and speech in noisy environments and detecting target events for improved medical diagnosis and efficient processing.
We explore these questions using mathematical (signal processing) models, behavioral testing of human listeners (psychoacoustics), and neural recordings of brain activity (electroencephalography).