Calendar

Feb
18
Tue
Dissertation Defense: Nagaraj Mahajan @ Hackerman Hall B-17
Feb 18 @ 3:00 pm – 5:00 pm

Title: Neural Circuit Mechanisms of Stimulus Selection Underlying Spatial Attention

Thesis Committee: Shreesh P. Mysore, Hynek Hermansky, Mounya Elhilali, Ralph Etienne-Cummings

Abstract: Humans and animals routinely encounter competing pieces of information in their environments, and must continually select the most salient in order to survive and behave adaptively. Here, using computational modeling, extracellular neural recordings, and focal, reversible silencing of neurons in the midbrain of barn owls, we uncovered how two essential computations underlying competitive selection are implemented in the brain: a) the ability to select the most salient stimulus among all pairs of stimulus locations, and b) the ability to signal the most salient stimulus categorically.

We first discovered that a key inhibitory nucleus in the midbrain attention network, called isthmi pars magnocellularis (Imc), encodes visual space with receptive fields (RFs) that have multiple excitatory hotspots ("lobes"). Such multilobed encoding of visual space, previously unknown, is necessary for selection at all location pairs given the scarcity of Imc neurons. Although distributed seemingly randomly, the RF lobe locations are optimized across the high-firing Imc neurons, allowing them to combinatorially solve selection across space. This combinatorially optimized inhibition strategy minimizes metabolic and wiring costs.

Next, we discovered that a 'donut-like' inhibitory mechanism, in which each competing option suppresses all options except itself, is highly effective at generating categorical responses. It surpasses the motifs of feedback inhibition, recurrent excitation, and divisive normalization commonly used in decision-making models. We demonstrated experimentally not only that this mechanism operates in the midbrain spatial selection network of barn owls, but also that it is required for categorical signaling by this network. Moreover, the pattern of inhibition in the midbrain forms an exquisitely structured 'multi-holed' donut, consistent with the network's combinatorial inhibitory function (computation a).
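For intuition, the donut-like motif can be sketched as a toy firing-rate model in which each unit inhibits every competitor but not itself; even a small input advantage then drives a near-categorical, winner-take-all response. This is a hypothetical illustration under simple assumed dynamics, not the model from the dissertation:

```python
import numpy as np

def donut_inhibition(drive, w_inh=2.0, steps=200, dt=0.1):
    """Toy rate model of 'donut-like' inhibition: each option
    suppresses all options except itself. Returns steady-state
    responses for the given feedforward drives."""
    drive = np.asarray(drive, dtype=float)
    r = np.zeros_like(drive)
    for _ in range(steps):
        # inhibition onto each unit = summed activity of all OTHER units
        inhibition = w_inh * (r.sum() - r)
        r = r + dt * (-r + np.maximum(drive - inhibition, 0.0))
    return r

# A 10% input advantage yields an almost all-or-none response profile:
# the weaker option is suppressed to nearly zero.
resp = donut_inhibition([1.0, 1.1])
```

The categorical character comes from the self-exempting inhibition: once the stronger unit pulls ahead, it suppresses its competitor without dampening itself.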

Our work demonstrates that the vertebrate midbrain uses seemingly carefully optimized structural and functional strategies to solve challenging computational problems underlying stimulus selection and spatial attention at all location pairs. The neural motifs discovered here represent circuit-based solutions that are generalizable to other brain areas, other forms of behavior (such as decision-making, action selection) as well as for the design of artificial systems (such as robotics, self-driving cars) that rely on the selection of one among many options.


Feb
27
Thu
Thesis Proposal: Raghavendra Pappagari @ Hackerman Hall B-17
Feb 27 @ 3:00 pm

Title: Towards a better understanding of spoken conversations: Assessment of sentiment and emotion

Abstract: In this talk, we present our work on understanding the emotional aspects of spoken conversations. Emotions play a vital role in our daily lives, helping us convey information to others that cannot be expressed verbally.

While humans can easily perceive emotions, these are notoriously difficult to define and recognize by machines. However, automatically detecting the emotion of a spoken conversation can be useful for a diverse range of applications such as human-machine interaction and conversation analysis. In this work, we considered emotion recognition in two particular scenarios. The first scenario is predicting customer sentiment/satisfaction (CSAT) in a call center conversation, and the second consists of emotion prediction in short utterances.

CSAT is defined as the overall sentiment (positive vs. negative) of the customer about his/her interaction with the agent. In this work, we perform a comprehensive search for adequate acoustic and lexical representations.

For the acoustic representation, we propose to use the x-vector model, which is known for its state-of-the-art performance on the speaker recognition task. The motivation for using x-vectors for CSAT is our observation that emotion information encoded in x-vectors affects speaker recognition performance. For the lexical representation, we introduce a novel method, CSAT Tracker, which computes the overall prediction from individual segment outcomes. Both methods rely on transfer learning to obtain the best performance. We perform classification using convolutional neural networks that combine the acoustic and lexical features. We evaluated our systems on US English telephone speech from call center data. We found that lexical models perform better than acoustic models and that fusing the two provides significant gains. Error analysis reveals that calls in which customers accomplished their goal but were still dissatisfied are the most difficult to predict correctly. We also found that the customer's speech is more emotional than the agent's.

For the second scenario, predicting emotion in short utterances, we present a novel approach based on x-vectors. We show that adapting the x-vector model for emotion recognition yields the best published results on three public datasets.

Mar
5
Thu
Thesis Proposal: Matthew Maciejewski @ Hackerman Hall B-17
Mar 5 @ 3:00 pm

Title: Single-Channel Speech Separation in Noisy and Reverberant Conditions

Abstract: An inevitable property of multi-party conversations is that more than one speaker will end up speaking simultaneously for portions of time. Many speech technologies, such as automatic speech recognition and speaker identification, are not designed to function on overlapping speech and suffer severe performance degradation under such conditions. Speech separation techniques aim to solve this problem by producing a separate waveform for each speaker in an audio recording with multiple talkers speaking simultaneously. The advent of deep neural networks has resulted in strong performance gains on the speech separation task. However, training and evaluation have been nearly ubiquitously restricted to a single dataset of clean, near-field read speech, which is not representative of many multi-person conversational settings that are frequently recorded on room microphones, introducing noise and reverberation. Because other speech technologies degrade under these conditions, speech separation systems are expected to suffer a decrease in performance as well.

The primary goal of this proposal is to develop novel techniques to improve speech separation in noisy and reverberant recording conditions. One core component of this work is the creation of additional synthetic overlap corpora spanning a range of more realistic and challenging conditions; the lack of suitable data makes this a necessary first step toward benchmarking the performance of state-of-the-art methods in these settings. Another proposed line of investigation is the integration of speech separation with speech enhancement, the task of improving a speech signal by removing noise or reverberation. This is a natural combination due to similarities in problem formulation and general approach. Finally, we propose an investigation into the effectiveness of speech separation as a pre-processing step for speech technologies that struggle with overlapping speech, such as automatic speech recognition, as well as tighter integration of speech separation with these "downstream" systems.

Mar
12
Thu
Dissertation Defense: Pramuditha Perera @ Malone Hall G33/35
Mar 12 @ 3:00 pm

Current university policy: students and faculty CAN attend dissertation defenses as long as fewer than 25 people are present.

Title: Deep Learning Based Novelty Detection

Abstract: In recent years, intelligent systems powered by artificial intelligence and computer vision that perform visual recognition have gained much attention. These systems observe instances and labels of known object classes during training and learn association patterns that can be used during inference. A practical visual recognition system should first determine whether an observed instance is from a known class. If it is from a known class, then the identity of the instance is queried through classification. The former process is commonly known as novelty detection (or novel class detection) in the literature. Given a set of image instances from known classes, the goal of novelty detection is to determine whether an observed image during inference belongs to one of the known classes.

In this thesis, deep learning-based approaches to novelty detection are studied under four different settings. In the first two settings, the availability of out-of-distribution (OOD) data is assumed. Under this assumption, novelty detection can be studied separately for the cases of multiple known classes and of a single known class; these two problem settings are referred to in the literature as multi-class novelty detection with OOD data and one-class novelty detection with OOD data, respectively. It is also possible to study the problem in a more constrained setting where only data from the known classes are available for training. When multiple known classes exist in this setting, the novelty detection problem is known as multi-class novelty detection or open-set recognition; when only a single class exists, it is known as one-class novelty detection.

Finally, we study a practical application of novelty detection in mobile Active Authentication (AA). For an AA-based novelty detector, latency and efficiency are as important as detection accuracy. Solutions are presented for the problem of quickly detecting intrusions in mobile AA systems with lower false detection rates and higher resource efficiency. Bayesian and minimax versions of Quickest Change Detection (QCD) algorithms are introduced to quickly detect intrusions in mobile AA systems. These algorithms are extended with an update rule to facilitate low-frequency sensing, which leads to low resource utilization.

Committee Members: Vishal Patel, Trac Tran, Najim Dehak

Mar
18
Wed
Dissertation Defense: Yan Cheng @ Malone Hall G33/35
Mar 18 @ 2:00 pm

Taking place remotely. Email Belinda Blinkoff for more information.

Title: Engineering Earth-Abundant Colloidal Plasmonic and Semiconductor Nanomaterials for Solar Energy Harvesting and Detection Applications

Abstract: Colloidal nanomaterials have shown intriguing optical and electronic properties, making them important building blocks for a variety of applications, including photocatalysis, photovoltaics, and photodetectors. Their morphology and composition are effective tuning knobs for achieving desirable spectral characteristics for specific applications. In addition, they can be synthesized using solution-processed methods, which possess the advantages of low cost, facile fabrication, and compatibility with building flexible devices. There is an ongoing quest for better colloidal materials with superior properties and high natural abundance for commercial viability. This thesis focuses on three such materials classes and applications: 1) studying the photophysical properties of earth-abundant plasmonic aluminum nanoparticles, 2) tailoring the optical profiles of semiconductor quantum dot solar cells with near-infrared sensitivity, and 3) using one-dimensional nanostructures for photodetector applications. A variety of analytical techniques and simulations are employed to characterize both the morphology and optical properties of the nanostructures and to evaluate the performance of nanomaterial-based optoelectronic devices.

The first experimental section of this thesis consists of a systematic study of electron relaxation dynamics in solution-processed large aluminum nanocrystals. Transient absorption measurements are used to obtain the important characteristic relaxation timescales for each thermalization process. We show that several of the relevant timescales in aluminum differ from those in analogous noble metal nanoparticles, and we propose that surface modification could be a useful tool for tuning heat transfer rates between the nanostructures and the solvent. Further systematic studies on the relaxation dynamics in aluminum nanoparticles with tunable sizes show size-dependent phonon vibrational and damping characteristics that are influenced by size polydispersity, surface oxidation, and the presence of organic capping layers on the particles. These studies are significant first steps in demonstrating the feasibility of using aluminum nanomaterials for efficient photocatalysis.

The next section summarizes studies on the design and fabrication of multicolored PbS-based quantum dot solar cells. Specifically, thin film interference effects and multi-objective optimization methods are used to generate cell designs with controlled reflection and transmission spectra resulting in programmable device colors or visible transparency. Detailed investigations into the trade-off between the attainable color or transparency and photocurrent are discussed. The results of this study could be used to enable solar cell window-coatings and other controlled-color optoelectronic devices.

The last experimental section of the thesis describes work on using 1D antimony selenide nanowires for flexible photodetector applications. A one-pot solution-based synthetic method is developed for producing a molecular ink which allows fabrication of devices on flexible substrates. Thorough characterization of the nanowire composition and morphology is performed. Flexible, broadband antimony selenide nanowire photodetectors are fabricated and show fast response and good mechanical stability. With further tuning of the nanowire size, spectral selectivity should be achievable. The excellent performance of the nanowire photodetectors is promising for the broad implementation of semiconductor inks in flexible photodetectors and photoelectronic switches.

Committee Members: Susanna Thon, Amy Foster, Jin Kang

Mar
26
Thu
Seminar: David Harwath, Massachusetts Institute of Technology
Mar 26 @ 3:00 pm

This presentation happened remotely. Follow this link to view it. Please note that the presentation doesn’t start until 30 minutes into the video.

Title: Learning Spoken Language Through Vision

Abstract: Humans learn spoken language and visual perception at an early age by being immersed in the world around them. Why can’t computers do the same? In this talk, I will describe our work to develop methodologies for grounding continuous speech signals at the raw waveform level to natural image scenes. I will first present self-supervised models capable of jointly discovering spoken words and the visual objects to which they refer, all without conventional annotations in either modality. Next, I will show how the representations learned by these models implicitly capture meaningful linguistic structure directly from the speech signal. Finally, I will demonstrate that these models can be applied across multiple languages, and that the visual domain can function as an “interlingua,” enabling the discovery of word-level semantic translations at the waveform level.

Bio: David Harwath is a research scientist in the Spoken Language Systems group at the MIT Computer Science and Artificial Intelligence Lab (CSAIL). His research focuses on multi-modal learning algorithms for speech, audio, vision, and text. His work has been published at venues such as NeurIPS, ACL, ICASSP, ECCV, and CVPR. Under the supervision of James Glass, his doctoral thesis introduced models for the joint perception of speech and vision. This work was awarded the 2018 George M. Sprowls Award for the best Ph.D. thesis in computer science at MIT.

He holds a Ph.D. in computer science from MIT (2018), an S.M. in computer science from MIT (2013), and a B.S. in electrical engineering from UIUC (2010).

Apr
2
Thu
Seminar: Shinji Watanabe
Apr 2 @ 3:00 pm – 4:00 pm

This presentation is happening remotely. Click this link as early as 15 minutes before the scheduled start time of the presentation to watch in a Zoom meeting.

Title: Interpretable End-to-End Neural Network for Audio and Speech Processing

Abstract: This talk introduces extensions of the basic end-to-end automatic speech recognition (ASR) architecture, focusing on its ability to integrate multiple functions to tackle major problems faced by current ASR technologies in adverse environments, including the cocktail-party and data-sparseness problems. The first topic is the integration of microphone-array signal processing, speech separation, and speech recognition in a single neural network to realize multichannel multi-speaker ASR for the cocktail-party problem. Our architecture is carefully designed to maintain the role of each module as a differentiable subnetwork, so that we can jointly optimize the whole network while keeping the interpretability of each subnetwork, including the speech separation, speech enhancement, and acoustic beamforming abilities in addition to ASR. The second topic is based on semi-supervised training using cycle-consistency, which enables us to leverage unpaired speech and/or text data by integrating ASR with text-to-speech (TTS) within the end-to-end framework. This scheme can be regarded as an interpretable disentanglement of audio signals, with an explicit decomposition into linguistic characteristics by ASR and speaker and speaking-style characteristics by speaker embedding. These explicitly decomposed characteristics are converted back to the original audio signals by neural TTS; thus we form an acoustic feedback loop based on speech recognition and synthesis, analogous to human hearing, and both components can be jointly optimized using only audio data.

Thesis Proposal: John Franklin
Apr 2 @ 3:00 pm

This presentation will be happening remotely over Zoom. Click this link as early as 15 minutes before the scheduled start time of the presentation to watch in a Zoom meeting.

Meeting ID: 618-589-385
Meeting Password: 261713

Title: Compressive Sensing for Wireless Systems with Massive Antenna Arrays

Abstract: Over the past two decades, the world has enjoyed exponential growth in wireless connectivity that has fundamentally changed the way people communicate and has opened the door to limitless new applications. With the advent of 5G, users will begin to enjoy enhanced mobile broadband links supporting peak rates of over 10 gigabits per second. 5G will also support massive machine-type communications and ultra-reliable low-latency communication with latencies below one millisecond. Continuing to achieve greater increases in system capacity requires the continual advancement of new technology to make efficient use of finite spectrum resources.

Researchers have studied Multiple-Input-Multiple-Output (MIMO) communications over the last several decades as a way to increase system capacity. The MIMO channel is composed of multiple transmit (input) antennas and multiple (output) receive antennas. The channel is represented as the impulse response between each transmit and receive antenna pair. In the simplest of channels, the pairwise impulse response reduces to a single coefficient. Many theoretical MIMO results rely on Rayleigh channels featuring independently distributed complex Gaussian variables as channel coefficients.

The concept of Massive MIMO emerged a decade ago and is a leading technology in 5G wireless. Massive MIMO features base stations that have massive antenna arrays that simultaneously service many users. The Massive MIMO array has many more antennas than users. Unlike traditional phased array antennas, Massive MIMO arrays have all (or a large portion of) their antennas connected to receive chains for baseband processing. Successfully decoding each user’s data stream requires estimates of the propagation channel. Channel estimation is usually aided through the use of pilot signals that are known to both the user terminal and the base station. Simultaneously estimating the channel matrix between each user and each antenna in a massive MIMO array creates challenges for pilot sequence design. More channel resources reserved for pilot sequences for channel estimation result in fewer resources for user data.

Several efforts have shown that the millimeter-wave (mmWave) massive MIMO channel exhibits sparse structure: the number of distinct, resolvable paths between a user and a massive MIMO array is generally much smaller than the number of base station antennas. Early theoretical MIMO work relied on Rayleigh channels because they admit closed-form solutions. In reality, the mmWave massive MIMO channel is low rank, as it can be modeled by a small number of resolvable multipath components. This opens opportunities for new channel estimation techniques based on compressive sensing and sparse recovery.

Although Massive MIMO will be featured in future 5G services, there is still much untapped potential. Through developing better channel estimation schemes, additional system throughput can be achieved. This work will consider:

  • Generation of sparse mmWave channels for analysis
  • Multi-user pilot design approaches for measuring the massive MIMO channel
  • Channel estimates formed through sparse recovery methods
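As a sketch of the sparse-recovery idea behind the third item, a generic orthogonal matching pursuit (OMP) routine can recover a channel with only a few active paths from far fewer pilot measurements than unknowns. The random synthetic dictionary and dimensions below are illustrative assumptions, not the proposal's actual pilot design:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: estimate a k-sparse vector x
    from measurements y = A @ x by greedily selecting dictionary
    columns and refitting with least squares."""
    residual = y.astype(complex)
    support = []
    coef = np.zeros(0, dtype=complex)
    for _ in range(k):
        # choose the column most correlated with the current residual
        j = int(np.argmax(np.abs(A.conj().T @ residual)))
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1], dtype=complex)
    x[support] = coef
    return x

# Toy sparse channel: k = 3 active paths out of n = 128 dictionary atoms,
# observed through m = 64 compressed pilot measurements (m < n).
rng = np.random.default_rng(0)
m, n, k = 64, 128, 3
A = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2 * m)
h = np.zeros(n, dtype=complex)
h[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k) + 1j * rng.standard_normal(k)
h_hat = omp(A, y=A @ h, k=k)
```

Because the channel is k-sparse, m well-chosen pilot measurements with m much smaller than n suffice for recovery, which is exactly the resource saving the proposal targets.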
Apr
9
Thu
Seminar: Gopala Anumanchipalli, University of California, San Francisco
Apr 9 @ 3:00 pm

This was a virtual seminar that can be viewed by clicking here.

Title: Unifying Human Processes and Machine Models for Spoken Language Interfaces

Abstract: Recent years have witnessed tremendous progress in digital speech interfaces for information access (e.g., Amazon's Alexa, Google Home, etc.). The commercial success of these applications is hailed as one of the major achievements of the "AI" era. Indeed, these accomplishments are made possible only by sophisticated deep learning models trained on enormous amounts of supervised data over extensive computing infrastructure. Yet these systems are not robust to variations (such as accents or out-of-vocabulary words), remain uninterpretable, and fail in unexpected ways. Most important of all, these systems cannot easily be extended to speech- and language-disabled users, who would potentially benefit the most from the availability of such technologies. I am a speech scientist interested in computational modelling of the human speech communication system towards building intelligent spoken language systems. I will present my research, in which I have tapped into human speech communication processes to build robust spoken language systems, drawing specifically on theories of phonology and on physiological data, including cortical signals recorded in humans as they produce fluent speech. The insights from these studies reveal elegant organizational principles and computational mechanisms employed by the human brain for fluent speech production, the most complex of motor behaviors. These findings hold the key to the next revolution in human-inspired, human-compatible spoken language technologies that, besides alleviating the problems faced by current systems, can meaningfully impact the lives of millions of people with speech disabilities.

Bio: Gopala Anumanchipalli, PhD, is a researcher at the Department of Neurological Surgery and the Weill Institute for Neurosciences at the University of California, San Francisco. His interests are in i) understanding the neural mechanisms of human speech production towards developing next-generation Brain-Computer Interfaces, and ii) computational modelling of human speech communication mechanisms towards building robust speech technologies. Earlier, Gopala was a postdoctoral fellow at UCSF working with Edward F. Chang, MD. He received his PhD in Language and Information Technologies from Carnegie Mellon University, working with Prof. Alan Black on speech synthesis.

Apr
16
Thu
Thesis Proposal: Golnoosh Kamali
Apr 16 @ 3:00 pm

This event will occur remotely in a Zoom meeting at this link. Please join no earlier than 15 minutes before the scheduled start of the presentation.

Title: Using Systems Modeling to Localize the Seizure Onset Zone in Epilepsy Patients from Single Pulse Electrical Stimulation Recordings

Abstract: Surgical resection of the seizure onset zone (SOZ) can potentially lead to seizure freedom in medically refractory epilepsy patients. However, localizing the SOZ can be a time-consuming and tedious process involving visual inspection of intracranial electroencephalographic (iEEG) recordings captured during passive patient monitoring. Single pulse electrical stimulation (SPES) is currently performed on patients undergoing invasive EEG monitoring, mainly for mapping functional brain networks such as language and motor networks. We hypothesize that evoked responses from SPES can also be used to localize the SOZ, as they may express the natural frequencies and connectivity of the iEEG network. To test this hypothesis, we construct patient-specific single-input multi-output transfer function models from the evoked responses recorded from eight epilepsy patients who underwent SPES evaluation and iEEG monitoring. Our preliminary results suggest that the stimulation electrodes producing the highest system gain, as measured by the 𝓗∞ norm, correspond to the electrodes clinically defined as being in the SOZ in successfully treated patients.
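To illustrate the gain criterion, the 𝓗∞ norm of a stable discrete-time transfer function is its peak frequency-response magnitude (for the single-input multi-output models in the proposal, the peak largest singular value across frequency). The sketch below grids the unit circle for a SISO example with made-up coefficients; it is not the patients' fitted models:

```python
import numpy as np

def hinf_norm(b, a, n_grid=4096):
    """Approximate the H-infinity norm of a stable discrete-time
    transfer function H(z) = b(z)/a(z) as the peak magnitude of the
    frequency response evaluated on a grid of the unit circle."""
    w = np.linspace(0.0, np.pi, n_grid)
    z = np.exp(1j * w)
    H = np.polyval(b, z) / np.polyval(a, z)
    return float(np.max(np.abs(H)))

# A lightly damped resonance (poles near the unit circle) has a much
# larger peak gain than a heavily damped system, so it would rank
# higher under an H-infinity criterion.
resonant = hinf_norm([1.0], [1.0, -1.8 * np.cos(0.3), 0.81])  # poles at 0.9e^{±j0.3}
damped = hinf_norm([1.0], [1.0, -0.2, 0.01])                  # double pole at 0.1
```

Under the hypothesis above, stimulation sites whose fitted models have large peak gain (strong, lightly damped resonances in the iEEG network) would flag candidate SOZ electrodes.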
