Note: This is a virtual presentation.
Title: An Efficient and Robust Multi-Stream Framework for End-to-End Speech Recognition
Abstract: In voice-enabled domestic or meeting environments, distributed microphone arrays aim to transcribe distant-speech interaction into text with high accuracy. However, with dynamic corruption from noise, reverberation, or human movement, there is no guarantee that any single microphone array (stream) is constantly informative. In these cases, an appropriate strategy to dynamically fuse streams or select the most informative array is necessary.
The multi-stream paradigm in Automatic Speech Recognition (ASR) considers scenarios where parallel streams carry diverse or complementary task-related knowledge. Such streams could be microphone arrays, frequency bands, or various modalities. Hence, robust stream fusion is crucial to emphasize informative streams over corrupted ones, especially under unseen conditions. This thesis focuses on improving the performance and robustness of speech recognition in multi-stream scenarios.
In recent years, with the increasing use of Deep Neural Networks (DNNs) in ASR, End-to-End (E2E) approaches, which directly transcribe human speech into text, have received greater attention. In this thesis, a multi-stream framework is presented based on the joint Connectionist Temporal Classification/Attention (CTC/Attention) E2E model, where parallel streams are represented by separate encoders. On top of the regular attention networks, a secondary stream-fusion network is introduced to steer the decoder toward the most informative streams. Two representative frameworks are proposed: Multi-Encoder Multi-Array (MEM-Array) and Multi-Encoder Multi-Resolution (MEM-Res).
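To make the stream-fusion idea concrete, here is a minimal sketch (in PyTorch, with illustrative names and dimensions; not the thesis code) of a secondary attention network that weights each stream's context vector at a given decoder step:

    import torch
    import torch.nn as nn

    class StreamFusion(nn.Module):
        # Hierarchical attention over per-stream context vectors: each stream's
        # encoder/attention has already produced a context vector for the current
        # decoder step, and this module learns how much to trust each stream.
        def __init__(self, ctx_dim, dec_dim):
            super().__init__()
            self.score = nn.Linear(ctx_dim + dec_dim, 1)

        def forward(self, contexts, dec_state):
            # contexts: (batch, n_streams, ctx_dim); dec_state: (batch, dec_dim)
            n = contexts.size(1)
            dec = dec_state.unsqueeze(1).expand(-1, n, -1)
            scores = self.score(torch.cat([contexts, dec], dim=-1)).squeeze(-1)
            weights = torch.softmax(scores, dim=-1)                # stream-level weights
            fused = (weights.unsqueeze(-1) * contexts).sum(dim=1)  # fused context vector
            return fused, weights

The fused context vector is what the decoder consumes, and the stream-level weights give a direct readout of which stream the model currently trusts.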
The MEM-Array model aims at improving far-field ASR robustness using multiple microphone arrays, each processed by a separate encoder. Since an increasing number of streams (encoders) requires substantial memory and massive amounts of parallel data, a practical two-stage training strategy is designed to address these issues. Furthermore, a two-stage augmentation scheme is presented to improve the robustness of the multi-stream model, where a small amount of parallel data is sufficient to achieve competitive results. In MEM-Res, two heterogeneous encoders with different architectures, temporal resolutions, and separate CTC networks work in parallel to extract complementary information from the same acoustics. Compared with the best single-stream performance, both models achieve substantial improvements and also outperform various conventional fusion strategies.
While the proposed framework optimizes information fusion in multi-stream scenarios, this thesis also studies Performance Monitoring (PM) measures that predict whether the recognition result of an end-to-end model is reliable, without ground-truth knowledge. Four different PM techniques are investigated, suggesting that PM measures based on attention distributions and decoder posteriors are well correlated with true performance.
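For intuition, one simple confidence proxy in this spirit (an illustration only, not the exact measures studied in the thesis) is the average entropy of the decoder's per-step posterior distributions, since confident, low-entropy posteriors tend to accompany reliable hypotheses:

    import numpy as np

    def mean_posterior_entropy(posteriors):
        # posteriors: (num_steps, vocab_size) decoder output distributions
        p = np.clip(posteriors, 1e-12, 1.0)
        step_entropy = -(p * np.log(p)).sum(axis=1)  # entropy in nats, per decoding step
        return float(step_entropy.mean())            # lower values suggest higher confidence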
Hynek Hermansky, Department of Electrical and Computer Engineering
Shinji Watanabe, Department of Electrical and Computer Engineering
Najim Dehak, Department of Electrical and Computer Engineering
Gregory Sell, JHU Human Language Technology Center of Excellence
Note: This is a virtual presentation.
Title: Think Bigger: Empower Yourself to Change the World
Abstract: During this talk, I will share some of my experiences and ultimately challenge the audience to place their research into a greater context. We must actively pursue ways to innovate by expanding our thinking about how we positively impact society. I will explore how a kid from East Baltimore grew up and developed the tools, skills, and abilities to thrive in a career where he currently leverages the best technology and expertise from around the globe in order to translate ideas into solutions that solve some of the world’s most complex problems.
Bio: Dr. Charles Johnson-Bey is a Senior Vice President at Booz Allen Hamilton. He is a global leader in technology innovation and uniquely leverages the intersection of technology, strategy, and business to create & capture value, lead change and drive execution. Dr. Johnson-Bey has more than 25 years of engineering experience spanning cyber resilience, signal processing, system architecture, prototyping, and hardware. Prior to joining Booz Allen, he was a research engineer at Motorola Corporate Research Labs and Corning Incorporated and taught electrical engineering at Morgan State University. He also worked at Lockheed Martin Corporation for 17 years, where he galvanized the company’s cyber resources and led research and development activities with organizations including Oak Ridge National Laboratory, Microsoft Research, and the GE Global Research Center. He serves on the Whiting School of Engineering Advisory Board and the Electrical and Computer Engineering Advisory Committee, both at Johns Hopkins University. He is also on the Cybersecurity Institute Advisory Board for the Community College of Baltimore County. Dr. Johnson-Bey received a B.S. in Electrical and Computer Engineering from Johns Hopkins University and both an M.S. and Ph.D. in Electrical Engineering from the University of Delaware.
This event is co-hosted by the ECE Department and the Whiting School of Engineering.
Note: This is a virtual presentation.
Title: Medical Image Modality Synthesis and Resolution Enhancement Based on Machine Learning Techniques
Abstract: To achieve satisfactory performance from automatic medical image analysis algorithms such as registration or segmentation, medical imaging data with the desired modality/contrast and high isotropic resolution are preferred, yet they are not always available. We addressed this problem in this thesis using 1) image modality synthesis and 2) resolution enhancement.
The first contribution of this thesis is a computed tomography (CT) to magnetic resonance imaging (MRI) synthesis method, developed to provide MRI when CT is the only modality acquired. The main challenges are that CT has poor contrast and high noise in soft tissues and that the CT-to-MR mapping is highly nonlinear. To overcome these challenges, we developed a convolutional neural network (CNN) based on a modified U-net. With this deep network for synthesis, we developed the first segmentation method that provides detailed grey matter anatomical labels on CT neuroimages using synthetic MRI.
The second contribution is a method for resolution enhancement for a common type of acquisition in clinical and research practice, one in which there is high resolution (HR) in the in-plane directions and low resolution (LR) in the through-plane direction. The challenge in improving the through-plane resolution for such acquisitions is that state-of-the-art CNN-based super-resolution methods are sometimes not applicable due to the lack of external LR/HR paired training data. To address this challenge, we developed a self super-resolution algorithm called SMORE and its iterative version, iSMORE, which are CNN-based yet do not require LR/HR paired training data other than the subject image itself. SMORE/iSMORE create training data from the HR in-plane slices of the subject image itself, then train and apply CNNs to through-plane slices to improve spatial resolution and remove aliasing. In this thesis, we apply SMORE/iSMORE to multiple simulated and real data sets to demonstrate their accuracy and generalizability. Also, SMORE as a preprocessing step is shown to improve segmentation accuracy.
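The core self-training idea can be sketched as follows (a conceptual illustration, not the released SMORE implementation; the Gaussian blur stands in for the true slice-selection profile):

    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    def make_training_pairs(volume, through_plane_axis=2, blur_sigma=2.0):
        # Degrade the HR in-plane slices along one in-plane direction to mimic
        # the low through-plane resolution, yielding paired LR/HR training data
        # drawn entirely from the subject image itself.
        pairs = []
        for k in range(volume.shape[through_plane_axis]):
            hr_slice = np.take(volume, k, axis=through_plane_axis)
            lr_slice = gaussian_filter1d(hr_slice, sigma=blur_sigma, axis=0)
            pairs.append((lr_slice, hr_slice))
        return pairs

    # A CNN trained on these pairs is then applied to the blurry through-plane
    # slices of the same volume to sharpen them and reduce aliasing.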
In summary, CT-to-MR synthesis, SMORE, and iSMORE were demonstrated in this thesis to be effective preprocessing algorithms for improving visual quality and for downstream automatic medical image analysis tasks such as registration and segmentation.
Jerry Prince, Department of Electrical and Computer Engineering
John Goutsias, Department of Electrical and Computer Engineering
Trac Tran, Department of Electrical and Computer Engineering
Title: Single Image Based Crowd Counting Using Deep Learning
Abstract: Estimating counts and density maps from crowd images has a wide range of applications such as video surveillance, traffic monitoring, public safety, and urban planning. In addition, techniques developed for crowd counting can be applied to related tasks in other fields of study such as cell microscopy, vehicle counting, and environmental surveys. Crowd counting and density map estimation from a single image is a difficult problem since it suffers from occlusions, perspective changes, background clutter, non-uniform density, and intra-scene and inter-scene variations in scale and perspective. These issues are further exacerbated in highly congested scenes. To overcome these challenges, we propose a variety of deep learning architectures that specifically incorporate global/local context information, attention mechanisms, and specialized iterative and multi-level multi-pathway fusion schemes for combining information from multiple layers of a deep network. Through extensive experiments and evaluations on several crowd counting datasets, we demonstrate that the proposed networks achieve significant improvements over existing approaches.
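As background on how such networks are typically supervised (a standard construction in the crowd counting literature, shown here with a fixed kernel width; many works use geometry-adaptive kernels instead), the ground-truth density map is built by placing a Gaussian at every annotated head so that the map integrates to the person count:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def density_map(head_points, height, width, sigma=4.0):
        # Place a unit impulse at each annotated head location, then smooth,
        # so that dmap.sum() approximately equals the number of people.
        dmap = np.zeros((height, width), dtype=np.float32)
        for x, y in head_points:
            xi, yi = int(round(x)), int(round(y))
            if 0 <= yi < height and 0 <= xi < width:
                dmap[yi, xi] += 1.0
        return gaussian_filter(dmap, sigma=sigma)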
We also recognize the need for large amounts of data for training deep networks and their inability to generalize to new scenes and distributions. To overcome this challenge, we propose novel semi-supervised and weakly-supervised crowd counting techniques that effectively leverage large amounts of unlabeled or weakly labeled data. In addition to developing techniques with the ability to learn from limited labeled data, we also introduce a new large-scale crowd counting dataset which can be used to train considerably larger networks. The proposed dataset consists of 4,372 high-resolution images with 1.51 million annotations. We made explicit efforts to ensure that the images are collected under a variety of diverse scenarios and environmental conditions. The dataset provides a richer set of annotations such as dots, approximate bounding boxes, and blur levels.
Title: Deep Learning Based Methods for Ultrasound Image Segmentation and Magnetic Resonance Image Reconstruction
Abstract: In recent years, deep learning (DL) algorithms, in particular convolutional networks, have rapidly become a methodology of choice for analyzing medical images. DL has shown promising performance in many medical image analysis (MIA) problems, including classification, segmentation, and reconstruction. However, the inherent differences between natural images and medical images (ultrasound, MRI, etc.) hinder the performance of DL-based methods originally designed for natural images. Another obstacle for DL-based MIA is the availability of large-scale training datasets, as it has been shown that large and diverse datasets can effectively improve the robustness and generalization ability of DL networks.
In this thesis, we develop various deep learning-based approaches to address two medical image analysis problems. In the first problem, we focus on computer-assisted orthopedic surgery (CAOS) applications that use ultrasound as the intra-operative imaging modality. This problem requires an automatic, real-time algorithm to detect and segment bone surfaces and shadows in order to guide the orthopedic surgeon to a standardized diagnostic viewing plane with minimal artifacts. To cope with relatively small datasets and image differences across ultrasound machines, we develop DL-based frameworks that integrate a local phase filtering technique into the DL framework, thus improving robustness.
Finally, we propose a fast and accurate Magnetic Resonance Imaging (MRI) reconstruction framework using a novel Convolutional Recurrent Neural Network (CRNN). Extensive experiments and evaluations on knee and brain datasets have shown its outstanding results compared to traditional compressed sensing and other DL-based methods. Furthermore, we extend this method to enable multi-sequence reconstruction, where the T2-weighted MRI image provides guidance and improvement to the reconstruction of amide proton transfer-weighted (APTw) images.
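For context, learning-based MRI reconstruction pipelines of this kind typically interleave a learned refinement step with a data-consistency step; a minimal sketch of the latter (illustrative names, single-coil Cartesian sampling assumed, not the thesis implementation) is:

    import numpy as np

    def data_consistency(recon_image, measured_kspace, sampling_mask):
        # Keep the network's k-space prediction where no data were acquired,
        # but replace it with the measured values at sampled locations.
        k_pred = np.fft.fft2(recon_image)
        k_dc = np.where(sampling_mask, measured_kspace, k_pred)
        return np.fft.ifft2(k_dc)  # complex image; magnitude is usually displayed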
Carlos Castillo, Department of Electrical and Computer Engineering
Shanshan Jiang, Department of Radiology and Radiological Science
Ilker Hacihaliloglu, Department of Biomedical Engineering (Rutgers University)
Title: Unsupervised Domain Adaptation for Speaker Verification in the Wild
Abstract: Performance of automatic speaker verification (ASV) systems is very sensitive to mismatch between training (source) and testing (target) domains. The best way to address domain mismatch is to perform matched condition training – gather sufficient labeled samples from the target domain and use them in training. However, in many cases this is too expensive or impractical. Usually, gaining access to unlabeled target domain data, e.g., from open source online media, and labeled data from other domains is more feasible. This work focuses on making ASV systems robust to uncontrolled (‘wild’) conditions, with the help of some unlabeled data acquired from such conditions.
Given acoustic features from both domains, we propose learning a mapping function – a deep convolutional neural network (CNN) with an encoder-decoder architecture – between features of the two domains. We explore training the network in two different scenarios: training on paired speech samples from both domains and training on unpaired data. In the former case, where the paired data is usually obtained via simulation, the CNN is treated as a non-linear regression function and is trained to minimize the L2 loss between the original and predicted target-domain features. Though effective, we provide empirical evidence that this approach introduces distortions that affect verification performance. To address this, we explore training the CNN using an adversarial loss (along with L2), which makes the predicted features indistinguishable from the original ones and thus improves verification performance.
The above framework, though effective, cannot be used to train the network on unpaired data obtained by independently sampling speech from both domains. In this case, we first train a CNN using an adversarial loss to map features from source to target. We then map the predicted features back to the source domain using an auxiliary network and minimize a cycle-consistency loss between the original and reconstructed source features.
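A minimal sketch of that unpaired objective (CycleGAN-style; the mapping networks are placeholders and the adversarial terms are omitted for brevity) is:

    import torch.nn.functional as F

    def cycle_consistency_loss(src_feats, map_src_to_tgt, map_tgt_to_src):
        # Map source-domain features to the target domain, map them back with
        # the auxiliary network, and penalize the reconstruction error.
        fake_tgt = map_src_to_tgt(src_feats)
        reconstructed_src = map_tgt_to_src(fake_tgt)
        return F.l1_loss(reconstructed_src, src_feats)  # an L2 penalty works similarly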
To prevent the CNN from over-fitting when trained on limited amounts of data, we present a simple regularization technique. Our unsupervised adaptation approach using feature mapping also complements its supervised counterpart, where adaptation is done using labeled data from both domains. We focus on three domain mismatch scenarios: (1) sampling frequency mismatch between the domains, (2) channel mismatch, and (3) robustness to far-field and noisy speech acquired in wild conditions.
Title: Statistical Signal Processing Methods for Epigenetic Landscape Analysis
Abstract: Since the DNA structure was discovered in 1953, a great deal of effort has been put into studying this molecule in detail. We now know DNA comprises an organism’s genetic makeup and constitutes a blueprint for life. The study of DNA has dramatically increased our knowledge about cell function and evolution and has led to remarkable discoveries in biology and medicine.
Just as DNA is replicated during cell division, several chemical marks are also passed on to progeny during this process. Epigenetics studies these marks and represents a fascinating research area given their crucial role. Among all known epigenetic marks, 5mC DNA methylation is probably one of the most important given its well-established association with various biological processes, such as development and aging, and with diseases such as cancer. The work in this dissertation focuses primarily on this epigenetic mark, although it has the potential to be applied to other heritable marks.
In the 1940s, Waddington introduced the term epigenetic landscape to conceptually describe cell pluripotency and differentiation. The concept remained abstract until Jenkinson et al. (2017, 2018) estimated actual epigenetic landscapes from whole-genome bisulfite sequencing (WGBS) data, work that led to startling results with biological implications in development and disease. Here, we introduce an array of novel computational methods that draw from that work. First, we present CPEL, a method that uses a variant of the original landscape proposed by Jenkinson et al., which, together with a new hypothesis testing framework, allows for the detection of DNA methylation imbalances between homologous chromosomes. Then, we present CpelTdm, a method that builds upon CPEL to perform differential methylation analysis between groups of samples using targeted bisulfite sequencing data. Finally, we extend the original probabilistic model proposed by Jenkinson et al. to estimate methylation landscapes and perform differential analysis from nanopore data.
Overall, this work addresses immediate needs in the study of DNA methylation. The methods presented here can lead to a better characterization of this critical epigenetic mark and enable biological discoveries with implications for diagnosing and treating complex human diseases.
Title: Deep Learning Based Face Image Synthesis
Abstract: Face image synthesis is an important problem in the biometrics and computer vision communities due to its applications in law enforcement and entertainment. In this thesis, we develop novel deep neural network models and associated loss functions for two face image synthesis problems, namely thermal to visible face synthesis and visual attribute to face synthesis.
In particular, for thermal to visible face synthesis, we propose a model which makes use of facial attributes to obtain better synthesis. We use attributes extracted from visible images to synthesize attribute-preserved visible images from thermal imagery. A pre-trained attribute predictor network is used to extract attributes from the visible image. Then, a novel multi-scale generator is proposed to synthesize the visible image from the thermal image guided by the extracted attributes. Finally, a pre-trained VGG-Face network is leveraged to extract features from the synthesized image and the input visible image for verification.
In addition, we propose another thermal to visible face synthesis method based on a self-attention generative adversarial network (SAGAN) which allows efficient attention-guided image synthesis. Rather than focusing only on synthesizing visible faces from thermal faces, we also propose to synthesize thermal faces from visible faces. Our intuition is based on the fact that thermal images also contain some discriminative information about the person for verification. Deep features from a pre-trained Convolutional Neural Network (CNN) are extracted from the original as well as the synthesized images. These features are then fused to generate a template which is then used for cross-modal face verification.
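As a rough illustration of the verification step just described (the concatenation and cosine scoring below are simple stand-ins for whatever fusion and scoring rules the thesis actually uses):

    import numpy as np

    def make_template(feat_original, feat_synthesized):
        # Fuse deep features from the original image and its cross-spectrum
        # synthesis into a single, length-normalized template.
        t = np.concatenate([feat_original, feat_synthesized])
        return t / np.linalg.norm(t)

    def verification_score(template_a, template_b):
        # Cosine similarity between two fused templates.
        return float(np.dot(template_a, template_b) /
                     (np.linalg.norm(template_a) * np.linalg.norm(template_b)))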
Regarding attribute to face image synthesis, we propose the Att2SK2Face model for face image synthesis from visual attributes via sketch. In this approach, we first synthesize a facial sketch corresponding to the visual attributes and then generate the face image based on the synthesized sketch. The proposed framework is based on a combination of two different Generative Adversarial Networks (GANs) – (1) a sketch generator network which synthesizes a realistic sketch from the input attributes, and (2) a face generator network which synthesizes facial images from the synthesized sketch images with the help of facial attributes.
Finally, we propose another synthesis model, called Att2MFace, which can simultaneously synthesize multimodal faces from visual attributes without requiring paired data in different domains for training the network. We introduce a novel generator with multimodal stretch-out modules to simultaneously synthesize multimodal face images. Additionally, multimodal stretch-in modules are introduced in the discriminator which discriminates between real and fake images.
Title: Machine Learning for Beamforming in Ultrasound, Radar, and Audio
Abstract: Multi-sensor signal processing plays a crucial role in the working of several everyday technologies, from correctly understanding speech on smart home devices to ensuring aircraft fly safely. A specific type of multi-sensor signal processing called beamforming forms a central part of this thesis. Beamforming works by combining the information from several spatially distributed sensors to directionally filter information, boosting the signal from a certain direction but suppressing others. The idea of beamforming is key to the domains of ultrasound, radar, and audio.
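For readers unfamiliar with the operation, a textbook time-domain delay-and-sum beamformer (a generic sketch, not any specific system in this thesis) looks like this:

    import numpy as np

    def delay_and_sum(signals, fs, mic_positions, direction, c=343.0):
        # signals: (num_mics, num_samples); mic_positions: (num_mics, 3) in meters;
        # direction: unit vector pointing from the array toward the desired source.
        direction = np.asarray(direction, dtype=float)
        direction = direction / np.linalg.norm(direction)
        num_mics, num_samples = signals.shape
        delays = mic_positions @ direction / c   # relative arrival advances, in seconds
        delays -= delays.min()                   # make all compensating delays non-negative
        out = np.zeros(num_samples)
        for m in range(num_mics):
            shift = int(round(delays[m] * fs))
            out[shift:] += signals[m, :num_samples - shift]
        return out / num_mics

Each channel is delayed so that a wavefront arriving from the chosen direction lines up across microphones before summing, which reinforces that direction and attenuates others.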
Machine learning, succinctly defined by Tom Mitchell as “the study of algorithms that improve automatically through experience,” is the other central part of this thesis. Machine learning, especially its sub-field of deep learning, has enabled breakneck progress in tackling several problems that were previously thought intractable. Today, machine learning powers many of the cutting edge systems we see on the internet for image classification, speech recognition, language translation, and more.
In this dissertation, we look at beamforming pipelines in ultrasound, radar, and audio through a machine learning lens and endeavor to improve different parts of the pipelines using ideas from machine learning. Starting in the ultrasound domain, we use deep learning as an alternative to beamforming and improve the information extraction pipeline by simultaneously generating both a segmentation map and a high-quality B-mode image directly from raw received ultrasound data.
Next, we move to the radar domain and study how deep learning can be used to improve signal quality in ultra-wideband synthetic aperture radar by suppressing radio frequency interference, random spectral gaps, and contiguous block spectral gaps. Because the networks are trained on and applied to raw single-aperture data prior to beamforming, the approach works with myriad sensor geometries and different beamforming equations, a crucial requirement in synthetic aperture radar.
Finally, we move to the audio domain and derive a machine learning inspired beamformer to tackle the problem of ensuring the audio captured by a camera matches its visual content, a problem we term audiovisual zoom. Unlike prior work which is capable of only enhancing a few individual directions, our method enhances audio from a contiguous field of view.
Title: A Unified Visual Saliency Model for Neuromorphic Implementation
Abstract: Although computer capabilities have expanded tremendously, a significant wall remains between the computer and the human brain. The brain can process massive amounts of information obtained from a complex environment and control the entire body in real time with low energy consumption. This thesis tackles this mystery by modeling and emulating how the brain processes information based on the available knowledge of biological and artificial intelligence as studied in neuroscience, cognitive science, computer science, and computer engineering.
Saliency modeling relates to visual sense and biological intelligence. The retina captures and sends much data about the environment to the brain. However, as the visual cortex cannot process all the information in detail at once, the early stages of visual processing discard unimportant information. Because only the fovea has high-resolution imaging, individuals move their eyeballs in the direction of the important part of the scene. Therefore, eye movements can be thought of as an observable output of the early visual process in the brain. Saliency modeling aims to understand this mechanism and predict eye fixations.
Researchers have built biologically plausible saliency models that emulate the biological process from the retina through the visual cortex. Although many saliency models have been proposed, most are not bio-realistic. This thesis models the biological mechanisms for the perception of texture, depth, and motion. While texture plays a vital role in the perception process, defining texture in a mathematical way is not easy. Thus, it is necessary to build an architecture of texture processing based on the biological perception mechanism. Binocular stereopsis is another intriguing function of the brain. While scholars have evaluated many computational algorithms for stereovision, pursuing biological plausibility means implementing a neuromorphic method into a saliency model. Motion is another critical clue that helps animals survive. In this thesis, the motion feature is implemented in a bio-realistic way based on neurophysiological observation.
Moreover, the thesis will integrate these processes and propose a unified saliency model that can handle 3D dynamic scenes in a similar way to how the brain deals with the real environment. Thus, this investigation will use saliency modeling to examine intriguing properties of human visual processing and discuss how the brain achieves this remarkable capability.