Note: This is a virtual presentation. The link to the session is provided here.
Title: Bio-Mimetic Sensory Mapping with Attention for Auditory Scene Analysis
Abstract: The human auditory system performs complex tasks, such as holding a conversation in a busy cafe or picking out the melodic line of a particular instrument in an orchestra, with remarkable ease. It also adapts effortlessly to constantly changing conditions and novel stimuli. It achieves these feats through complex neuronal processes. First, the low-dimensional signal representing the acoustic stimulus is mapped to a higher-dimensional space through a series of feed-forward neuronal transformations, wherein the different auditory objects in the scene become discernible. These feed-forward processes are then complemented by top-down processes such as attention, driven by cognitive regions, which modulate the feed-forward processes so as to shine a spotlight on the object of interest: the interlocutor in the busy cafe, or the instrument of interest in the orchestra.
In this work, we explore leveraging these mechanisms, observed in the mammalian brain, within computational frameworks to address various auditory scene analysis tasks such as speech activity detection, environmental sound classification, and source separation. We develop bio-mimetic computational strategies to model the feed-forward sensory mapping processes as well as the complementary top-down mechanisms capable of modulating those feed-forward processes during attention.
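As a rough illustration of the kind of feed-forward sensory mapping described above, the sketch below applies a small bank of 2D spectrotemporal Gabor filters to a spectrogram, lifting a (time x frequency) input into a higher-dimensional stack of feature channels. The kernel size, rates, and scales here are illustrative assumptions, not the parameters used in the work.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(rate, scale, size=(15, 15)):
    """2D Gabor kernel tuned to a temporal rate and a spectral
    scale, both given in cycles per sample (illustrative units)."""
    t = np.arange(size[0]) - size[0] // 2
    f = np.arange(size[1]) - size[1] // 2
    T, F = np.meshgrid(t, f, indexing="ij")
    envelope = np.exp(-(T**2 + F**2) / (2 * (size[0] / 4) ** 2))
    carrier = np.cos(2 * np.pi * (rate * T + scale * F))
    return envelope * carrier

def feedforward_map(spectrogram, rates=(0.05, 0.1), scales=(0.05, 0.1)):
    """Map a (time x frequency) spectrogram to a stack of
    spectrotemporal channels, one per (rate, scale) pair."""
    channels = [
        fftconvolve(spectrogram, gabor_kernel(r, s), mode="same")
        for r in rates
        for s in scales
    ]
    return np.stack(channels)  # shape: (n_channels, time, frequency)
```

Here `spectrogram` would typically be a log-magnitude spectrogram; each channel then responds preferentially to modulations at one spectrotemporal resolution.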
In the first part of this work, using Gabor filters as an approximation of the feed-forward processes, we show that retuning the feed-forward processes under top-down attentional feedback is highly effective in enabling robust speech activity detection. We introduce the notion of memory to represent prior knowledge of acoustic objects and show that memories of objects can be used to deploy the necessary top-down feedback.

Next, we expand the feed-forward processes into a data-driven, distributed deep belief system consisting of multiple streams that capture the stimulus at different spectrotemporal resolutions, a feature observed in the human auditory system. We show that such a distributed system with inherent redundancies, further complemented by top-down attentional mechanisms using distributed object memories, allows for robust classification of environmental sounds in mismatched conditions.

Finally, we show that incorporating these ideas of distributed processing and attentional mechanisms into deep neural networks leads to state-of-the-art performance on even complex tasks such as source separation. Further, we show that in such a distributed system the sum of the parts is better than the individual parts, and that this property can be used to generate real-time top-down feedback, which in turn can be used to adapt the network to novel conditions during inference.
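The top-down feedback that recurs across all three parts can be caricatured as a gain pattern deployed from object memory onto the feed-forward channels. The minimal sketch below assumes the filterbank output of the previous snippet and uses a simple softmax-shaped gain over a stored per-channel energy profile as a stand-in; the actual retuning strategies in the work are more elaborate.

```python
import numpy as np

def attend(channels, object_memory, sharpness=4.0):
    """Reweight feed-forward channels with top-down gains derived
    from a remembered per-channel energy profile of the attended
    object (a hypothetical stand-in for the memories in the work).

    channels:      (n_channels, time, freq) feed-forward responses
    object_memory: (n_channels,) average channel energies of the
                   object, accumulated from prior exposures
    """
    # Softmax over the memory profile: channels in which the
    # remembered object was strong receive larger gains.
    logits = sharpness * object_memory / (object_memory.max() + 1e-8)
    gains = np.exp(logits - logits.max())
    gains /= gains.sum()
    gains *= gains.size  # rescale so the mean gain is 1
    return channels * gains[:, None, None]
```

In this caricature, attention does not change the filters themselves; it simply amplifies the channels where the remembered object is expected to live and attenuates the rest, the "spotlight" behavior described in the abstract.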
Overall, the results of this work show that leveraging these biologically inspired mechanisms within computational frameworks leads to enhanced robustness and adaptability to novel conditions, traits of the human auditory system that we sought to emulate.
Committee Members
Mounya Elhilali, Department of Electrical and Computer Engineering
Najim Dehak, Department of Electrical and Computer Engineering
Rama Chellappa, Department of Electrical and Computer Engineering