This presentation will be taking place remotely. Follow this link to enter the Zoom meeting where it will be hosted. Do not enter the meeting before 1:15 PM EDT.
Title: Extending the Potential of Thin-film Optoelectronics via Optical Engineering
Abstract: Optoelectronics based on nanomaterials has become a research focus in recent years; the technology bridges the fields of solid-state physics, electrical engineering, and materials science. The rapid development of optoelectronic devices over the last century has both benefited from and spurred advances in the science and engineering of photon detection and manipulation, image sensing, high-efficiency and high-power-density light emission, displays, communications, and renewable energy harvesting. Colloidal nanomaterials are a particularly promising material class for optoelectronics because of their functionality, cost-efficiency, and even new physics: their exotic light-matter interactions, low dimensionality, and solution-processability dramatically reduce the time and cost required to fabricate thin-film devices while providing wide compatibility with existing materials interfaces and device structures. This thesis focuses on exploring and assessing the capabilities of lead sulfide quantum dot-based solar cells and photodetectors. The discussion covers advances in techniques such as implementing novel photonic structures, designing and building novel characterization systems and methods, and coupling to external optical structures and components.
This thesis comprises three sections. The first section focuses on the design and adaptation of photonic structures to tailor the function and response of photovoltaics and other absorption-based optoelectronics for specific applications. In the first part, we introduce complete multi-layer thin-film interference effects into the design of solar cells. Through numerical calculation and optimization of the film thicknesses, combined with precise fabrication control, we achieved devices with specific target colors or optical transparency levels. In the second part, we investigate 2D photonic crystal bands in absorbing materials, which can be readily incorporated into nanomaterial thin films through nanostructuring. We carried out simulations and theoretical analyses and proposed a method to realize simultaneous selectivity in the device reflection, transmission, and absorption spectra, which is critical for optoelectronic applications.
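As a rough illustration of the multi-layer interference calculation described above (a sketch using the standard characteristic-matrix method, not the thesis code; the indices, thicknesses, and wavelength below are arbitrary examples):

```python
import numpy as np

def stack_reflectance(n_list, d_list, lam):
    """Normal-incidence reflectance of a multilayer thin-film stack via the
    characteristic (transfer) matrix method.
    n_list: refractive indices [ambient, layer1, ..., substrate]
    d_list: physical thicknesses of the interior layers (same units as lam)."""
    k0 = 2 * np.pi / lam
    M = np.eye(2, dtype=complex)
    for n, d in zip(n_list[1:-1], d_list):
        delta = k0 * n * d                     # phase thickness of the layer
        M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
    n0, ns = n_list[0], n_list[-1]
    B = M[0, 0] + M[0, 1] * ns
    C = M[1, 0] + M[1, 1] * ns
    r = (n0 * B - C) / (n0 * B + C)            # amplitude reflection coefficient
    return abs(r) ** 2

# Bare air/glass interface vs. a quarter-wave MgF2 anti-reflection coating
R_bare = stack_reflectance([1.0, 1.5], [], 550.0)
R_ar = stack_reflectance([1.0, 1.38, 1.5], [550.0 / (4 * 1.38)], 550.0)
```

Optimizing layer thicknesses against a target reflectance spectrum, as the thesis describes, would wrap a routine like this in a numerical optimizer.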
The next section focuses on designing and building a multi-modal microscopy system for thin-film optoelectronic devices, accompanied by analyses and interpretation of complex experimental data. The system was designed to provide simultaneous 2D spatial measurements, with micrometer resolution, of quantities including photoluminescence spectra, time-resolved photocurrent and photovoltage responses, and a rich variety of combinations of these measurements and their derived quantities. The multi-dimensional data helped us understand the intercorrelation between local defective regions in films and whole-device behavior, and yielded a more comprehensive profile of the mutual relationships between solar cell figures of merit.
In the last section, we discuss a new implementation of miniature solar concentrator arrays for lead sulfide quantum dot solar cells. First, we design and analyze the effects of a medium-concentration-ratio lens-type concentrator made from polydimethylsiloxane, a flexible organosilicon polymer. The concentrators were designed and optimized with the aid of ray-tracing simulation tools for the best compatibility with colloidal nanomaterial-based solar cells. Experimentally, we produced an integrated concentrator system delivering 20-fold current and power enhancements close to the theoretical predictions, and used our concentrator measurements to explain the rarely explored carrier dynamics critical to high-power operation of thin-film solar cells. Next, we design a wide-acceptance-angle dielectric solar concentrator that can be adapted to many types of high-efficiency small-area solar cells. The design was generated using rigorous optical models of ray behavior and was verified with ray-tracing simulations that yield the full annual 2D time-resolved collectible power for the resulting system. Finally, we discuss strategies for further extending the possibilities of nanomaterial-based optoelectronics for future challenges in energy production and related applications.
Susanna Thon – Department of Electrical and Computer Engineering
Jacob Khurgin – Department of Electrical and Computer Engineering
Mark Foster – Department of Electrical and Computer Engineering
Note: This is a virtual presentation. Here is the link for where the presentation will be taking place.
Title: Transfer function models of cortico-cortical evoked potentials for the localization of seizures in medically refractory epilepsy patients
Abstract: Surgical resection of the seizure onset zone (SOZ) could potentially lead to seizure freedom in medically refractory epilepsy (MRE) patients. However, localizing the SOZ is a time-consuming, subjective process involving visual inspection of intracranial electroencephalographic (iEEG) recordings captured during invasive passive patient monitoring. Cortical stimulation is currently performed on patients undergoing invasive EEG monitoring, mainly to map functional brain networks such as language and motor networks. We hypothesized that the evoked responses from single pulse electrical stimulation (SPES) can be used to localize the SOZ, as they may express the natural frequencies and connectivity of the iEEG network. We constructed patient-specific transfer function models from evoked responses recorded from 22 MRE patients who underwent SPES evaluation and iEEG monitoring. We then computed the frequency- and connectivity-dependent “peak gain” of the system, as measured by the H_∞ norm from systems theory, and the corresponding “floor gain,” the gain at which the frequency response dips 3 dB below the DC gain. In cases for which clinicians had high confidence in localizing the SOZ, the highest-peak-gain transfer functions with the smallest floor gains corresponded to stimulation of the clinically annotated SOZ and early-spread regions. In more complex cases, there was a large spread of the peak gains when the clinically annotated SOZ was stimulated. Interestingly, for patients who had successful surgeries, our peak-to-floor (PF) gain ratio agreed with clinical localization, no matter the complexity of the case. For patients with failed surgeries, the PF ratio did not match clinical annotations. Our findings suggest that transfer function gains and their corresponding frequency responses computed from SPES evoked responses may improve SOZ localization and thus surgical outcomes.
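As a rough sketch of how the peak gain, floor gain, and PF ratio might be computed for a single transfer function (illustrative only; the frequency grid and the lightly damped example system below are assumptions, not the study's patient models):

```python
import numpy as np
from scipy import signal

def pf_ratio(num, den, w_max=100.0, n_pts=100000):
    """Peak gain (a frequency-gridded proxy for the H-infinity norm), the
    'floor gain' 3 dB below the DC gain, and their ratio, for a SISO system."""
    sys = signal.TransferFunction(num, den)
    w = np.linspace(1e-4, w_max, n_pts)
    _, h = signal.freqresp(sys, w)
    peak = np.abs(h).max()                     # largest magnitude on the grid
    dc = np.abs(signal.freqresp(sys, [1e-9])[1][0])  # gain near DC
    floor = dc / np.sqrt(2)                    # 3 dB below the DC gain
    return peak, floor, peak / floor

# Lightly damped resonance: H(s) = 1 / (s^2 + 0.2 s + 1), a sharp spectral peak
peak, floor, pf = pf_ratio([1.0], [1.0, 0.2, 1.0])
```

A system with a pronounced resonance (a candidate "natural frequency" of the network) yields a large PF ratio, which is the intuition behind the localization statistic.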
Sridevi V. Sarma, Department of Biomedical Engineering
Joon Y. Kang, Department of Neurology
Archana Venkataraman, Department of Electrical and Computer Engineering
Nathan E. Crone, Department of Neurology
Note: This is a virtual presentation. Here is the link for where the presentation will be taking place.
Title: Circuits and Architecture for Bio-Inspired AI Accelerators
Abstract: Technological advances in microelectronics envisioned through Moore’s law have led to more powerful processors that can handle complex and computationally intensive tasks. Nonetheless, these advancements through technology scaling have come at the cost of significantly larger power consumption, which has posed challenges for data processing centers and computing at scale. Moreover, with the emergence of mobile computing platforms constrained by power and bandwidth for distributed computing, the need for more energy-efficient, scalable local processing has become more pressing.
Unconventional Compute-in-Memory (CiM) architectures such as the analog winner-takes-all associative memory, the Charge-Injection Device (CID) processor, and analog-array processing have been proposed as alternatives. Unconventional charge-based computation has been employed for neural network accelerators in the past, attaining impressive energy efficiency per operation in 1-bit vector-vector multiplications (VMMs) and, in recent work, multi-bit VMMs. A similar approach was used in earlier work, where a charge-injection device array stored binary-coded vectors and computations were carried out in the charge domain using binary or multi-bit inputs; computation proceeds by counting quanta of charge at the thermal noise limit, using packets of about 1,000 electrons. These systems are neither analog nor digital in the traditional sense but employ mixed-signal circuits to count the packets of charge, and hence we call them quasi-digital. By amortizing the energy costs of the mixed-signal encoding/decoding over compute vectors with a large number of elements, high energy efficiencies can be achieved.
In this dissertation, I present a design framework for AI accelerators using scalable compute-in-memory architectures. At the device level, two primitive elements are designed and characterized as target storage technologies: (i) a multilevel non-volatile computational cell and (ii) a pseudo Dynamic Random-Access Memory (pseudo-DRAM) computational bit-cell. Experimental results in deep-submicron CMOS processes demonstrate successful operation; behavioral models were subsequently developed and employed in large-scale system simulations and emulations. At the circuit level, compute-in-memory crossbars and mixed-signal circuits were designed, allowing seamless connectivity to digital controllers. At the level of data representation, both binary and stochastic-unary coding are used to compute Vector-Vector Multiplications (VMMs) at the array level, demonstrating successful experimental results and providing insight into the integration requirements of larger systems. Finally, at the architectural level, two AI accelerator architectures for data-center processing and edge computing are discussed. Both designs are scalable multi-core Systems-on-Chip (SoCs) in which vector-processor arrays are tiled on a 2-layer Network-on-Chip (NoC), enabling neighbor communication and a flexible compute-versus-memory trade-off. General-purpose Arm/RISC-V co-processors provide bootstrapping and system housekeeping, and a high-speed interface fabric facilitates input/output to main memory.
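The stochastic-unary coding idea can be caricatured in software (a toy sketch: the actual accelerators count packets of charge in mixed-signal hardware, and the matrix and vector values below are arbitrary examples in [0, 1]):

```python
import numpy as np

def stochastic_vmm(W, x, n_pulses=20000, seed=0):
    """Approximate y = W @ x with stochastic-unary coding: each value in [0, 1]
    becomes a Bernoulli pulse stream, multiplication is an AND of two streams,
    and the product is recovered by counting coincident pulses."""
    rng = np.random.default_rng(seed)
    y = np.zeros(W.shape[0])
    for j, xj in enumerate(x):
        xs = rng.random(n_pulses) < xj                 # input pulse stream
        for i in range(W.shape[0]):
            ws = rng.random(n_pulses) < W[i, j]        # weight pulse stream
            y[i] += np.count_nonzero(xs & ws) / n_pulses  # ≈ W[i, j] * x[j]
    return y

W = np.array([[0.2, 0.7], [0.5, 0.1]])
x = np.array([0.6, 0.4])
y_hat = stochastic_vmm(W, x)
```

Accuracy improves with the stream length, which mirrors the energy-versus-precision trade-off amortized over long compute vectors in the hardware.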
Andreas Andreou, Department of Electrical and Computer Engineering
Ralph Etienne-Cummings, Department of Electrical and Computer Engineering
Philippe Pouliquen, Department of Electrical and Computer Engineering
Note: This is a virtual presentation. Here is the link for where the presentation will be taking place.
Title: An Efficient and Robust Multi-Stream Framework for End-to-End Speech Recognition
Abstract: In voice-enabled domestic or meeting environments, distributed microphone arrays aim to transcribe distant speech into text with high accuracy. However, when dynamic corruption from noise, reverberation, or human movement is present, there is no guarantee that any single microphone array (stream) is constantly informative. In these cases, an appropriate strategy to dynamically fuse streams or select the most informative array is necessary.
The multi-stream paradigm in Automatic Speech Recognition (ASR) considers scenarios where parallel streams carry diverse or complementary task-related knowledge. Such streams could be microphone arrays, frequency bands, or various modalities. A robust stream-fusion strategy is therefore crucial to emphasize more informative streams over corrupted ones, especially under unseen conditions. This thesis focuses on improving the performance and robustness of speech recognition in multi-stream scenarios.
In recent years, with the increasing use of Deep Neural Networks (DNNs) in ASR, End-to-End (E2E) approaches, which directly transcribe human speech into text, have received greater attention. In this thesis, a multi-stream framework is presented based on the joint Connectionist Temporal Classification/Attention (CTC/ATT) E2E model, where parallel streams are represented by separate encoders. On top of the regular attention networks, a secondary stream-fusion network steers the decoder toward the most informative streams. Two representative frameworks are proposed: Multi-Encoder Multi-Array (MEM-Array) and Multi-Encoder Multi-Resolution (MEM-Res).
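A minimal numerical sketch of the stream-fusion idea (hypothetical vectors and a plain bilinear scorer; the thesis uses learned hierarchical attention networks):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def fuse_streams(contexts, query, W):
    """Stream-level attention: score each per-stream context vector against
    the decoder state (query) and form a convex combination, so the decoder
    leans toward the more informative stream."""
    scores = np.array([query @ W @ c for c in contexts])
    weights = softmax(scores)
    fused = sum(w * c for w, c in zip(weights, contexts))
    return fused, weights

# Two streams: one context aligned with the decoder state, one corrupted
clean = np.array([1.0, 0.5, -0.2])
corrupt = np.array([-1.0, 0.3, 2.0])   # stream degraded by noise/reverberation
query = np.array([1.0, 0.5, -0.2])     # decoder state (toy value)
fused, weights = fuse_streams([clean, corrupt], query, np.eye(3))
```

In this toy setup, the fusion weights concentrate on the stream whose context agrees with the decoder state, which is the behavior the secondary fusion network is trained to produce.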
The MEM-Array model aims at improving far-field ASR robustness using multiple microphone arrays, each handled by a separate encoder. Since an increasing number of streams (encoders) requires substantial memory and massive amounts of parallel data, a practical two-stage training strategy is designed to address these issues. Furthermore, a two-stage augmentation scheme is presented to improve the robustness of the multi-stream model, in which a small amount of parallel data is sufficient to achieve competitive results. In MEM-Res, two heterogeneous encoders with different architectures, temporal resolutions, and separate CTC networks work in parallel to extract complementary information from the same acoustics. Compared with the best single-stream performance, both models achieve substantial improvements and also outperform various conventional fusion strategies.
While the proposed framework optimizes information in multi-stream scenarios, this thesis also studies Performance Monitoring (PM) measures to predict whether the recognition result of an end-to-end model is reliable without ground-truth knowledge. Four different PM techniques are investigated, suggesting that PM measures based on attention distributions and decoder posteriors are well correlated with true performance.
Hynek Hermansky, Department of Electrical and Computer Engineering
Shinji Watanabe, Department of Electrical and Computer Engineering
Najim Dehak, Department of Electrical and Computer Engineering
Gregory Sell, JHU Human Language Technology Center of Excellence
Title: Medical Image Modality Synthesis and Resolution Enhancement Based on Machine Learning Techniques
Abstract: To achieve satisfactory performance from automatic medical image analysis algorithms such as registration or segmentation, medical imaging data with the desired modality/contrast and high isotropic resolution are preferred, yet they are not always available. We addressed this problem in this thesis using 1) image modality synthesis and 2) resolution enhancement.
The first contribution of this thesis is a computed tomography (CT)-to-magnetic resonance imaging (MRI) synthesis method, developed to provide MR images when CT is the only modality acquired. The main challenges are that CT has poor contrast and high noise in soft tissues and that the CT-to-MR mapping is highly nonlinear. To overcome these challenges, we developed a convolutional neural network (CNN) based on a modified U-Net. With this deep synthesis network, we developed the first segmentation method that provides detailed grey matter anatomical labels on CT neuroimages using synthetic MRI.
The second contribution is a resolution enhancement method for a common type of acquisition in clinical and research practice: one with high resolution (HR) in the in-plane directions and low resolution (LR) in the through-plane direction. The challenge in improving the through-plane resolution of such acquisitions is that state-of-the-art convolutional neural network (CNN)-based super-resolution methods are sometimes not applicable due to the lack of external LR/HR paired training data. To address this challenge, we developed a self super-resolution algorithm called SMORE and its iterative version, iSMORE, which are CNN-based yet do not require LR/HR paired training data other than the subject image itself. SMORE/iSMORE create training data from the HR in-plane slices of the subject image, then train and apply CNNs to through-plane slices to improve spatial resolution and remove aliasing. In this thesis, we apply SMORE/iSMORE to multiple simulated and real data sets to demonstrate their accuracy and generalizability. SMORE is also shown to improve segmentation accuracy when used as a preprocessing step.
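The self-supervised pairing idea behind SMORE can be caricatured in a few lines (a simplified sketch; the actual algorithm models the scanner's slice profile and trains super-resolution CNNs on these pairs, and the blur parameters here are arbitrary):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def self_training_pairs(volume, blur_sigma=2.0):
    """SMORE-style idea, simplified: the HR in-plane slices of the subject
    volume supply their own training data. Blurring each slice along one
    in-plane axis mimics the through-plane low resolution, yielding LR/HR
    pairs without any external data."""
    pairs = []
    for k in range(volume.shape[2]):                     # HR in-plane slices
        hr = volume[:, :, k]
        lr = gaussian_filter1d(hr, blur_sigma, axis=0)   # simulated LR version
        pairs.append((lr, hr))
    return pairs

vol = np.random.default_rng(0).random((16, 16, 8))       # toy volume
pairs = self_training_pairs(vol)
```

A network trained on these pairs is then applied to the genuinely low-resolution through-plane slices of the same volume.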
In summary, this thesis demonstrates that CT-to-MR synthesis, SMORE, and iSMORE are effective preprocessing algorithms for improving visual quality and downstream automatic medical image analysis tasks such as registration and segmentation.
Jerry Prince, Department of Electrical and Computer Engineering
John Goutsias, Department of Electrical and Computer Engineering
Trac Tran, Department of Electrical and Computer Engineering
Title: Single Image Based Crowd Counting Using Deep Learning
Abstract: Estimating counts and density maps from crowd images has a wide range of applications, such as video surveillance, traffic monitoring, public safety, and urban planning. In addition, techniques developed for crowd counting can be applied to related tasks in other fields of study, such as cell microscopy, vehicle counting, and environmental surveys. Crowd counting and density map estimation from a single image is a difficult problem that suffers from occlusions, perspective changes, background clutter, non-uniform density, and intra-scene and inter-scene variations in scale and perspective. These issues are further exacerbated in highly congested scenes. To overcome these challenges, we propose a variety of deep learning architectures that specifically incorporate global/local context information, attention mechanisms, and specialized iterative and multi-level multi-pathway fusion schemes for combining information from multiple layers in a deep network. Through extensive experiments and evaluations on several crowd counting datasets, we demonstrate that the proposed networks achieve significant improvements over existing approaches.
We also recognize the need for large amounts of data for training deep networks and their inability to generalize to new scenes and distributions. To overcome this challenge, we propose novel semi-supervised and weakly-supervised crowd counting techniques that effectively leverage large amounts of unlabeled or weakly-labeled data. In addition to developing techniques with the ability to learn from limited labeled data, we also introduce a new large-scale crowd counting dataset that can be used to train considerably larger networks. The proposed dataset consists of 4,372 high-resolution images with 1.51 million annotations. We made explicit efforts to ensure that the images were collected under a variety of scenarios and environmental conditions. The dataset provides a rich set of annotations, including dots, approximate bounding boxes, and blur levels.
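For illustration, a common way the dot annotations mentioned above are turned into a density-map regression target (a standard convention in crowd counting, not a claim about this thesis's exact kernel choice; the fixed sigma and toy points are assumptions):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def density_map(points, shape, sigma=4.0):
    """Convert dot annotations (head positions as (row, col)) into a density
    map whose integral equals the crowd count; the counting network then
    regresses this map from the image."""
    dm = np.zeros(shape, dtype=float)
    for y, x in points:
        dm[int(y), int(x)] += 1.0
    return gaussian_filter(dm, sigma)   # spread each dot into a Gaussian blob

pts = [(20.0, 30.0), (40.0, 50.0), (42.0, 52.0)]   # three annotated heads
dm = density_map(pts, (80, 80))
```

Because the Gaussian blobs integrate to one each, summing the predicted map recovers the estimated count.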
Title: Deep Learning Based Methods for Ultrasound Image Segmentation and Magnetic Resonance Image Reconstruction
Abstract: In recent years, deep learning (DL) algorithms, in particular convolutional networks, have rapidly become a methodology of choice for analyzing medical images. They have shown promising performance in many medical image analysis (MIA) problems, including classification, segmentation, and reconstruction. However, the inherent differences between natural images and medical images (ultrasound, MRI, etc.) have hindered the performance of DL-based methods originally designed for natural images. Another obstacle for DL-based MIA is the availability of large-scale training datasets, as it has been shown that large and diverse datasets can effectively improve the robustness and generalization ability of DL networks.
In this thesis, we develop deep learning-based approaches to address two medical image analysis problems. The first problem concerns computer-assisted orthopedic surgery (CAOS) applications that use ultrasound as the intra-operative imaging modality. These applications require an automatic, real-time algorithm to detect and segment bone surfaces and shadows in order to guide the orthopedic surgeon to a standardized diagnostic viewing plane with minimal artifacts. Because the available datasets are relatively small and images differ across ultrasound machines, we develop DL-based frameworks that integrate a local phase filtering technique into the network, thus improving robustness.
Finally, we propose a fast and accurate Magnetic Resonance Imaging (MRI) reconstruction framework using a novel Convolutional Recurrent Neural Network (CRNN). Extensive experiments and evaluations on knee and brain datasets show outstanding results compared to traditional compressed sensing and other DL-based methods. Furthermore, we extend this method to multi-sequence reconstruction, in which a T2-weighted MR image guides and improves the reconstruction of amide proton transfer-weighted images.
Carlos Castillo, Department of Electrical and Computer Engineering
Shanshan Jiang, Department of Radiology and Radiological Science
Ilker Hacihaliloglu, Department of Biomedical Engineering (Rutgers University)
Title: Unsupervised Domain Adaptation for Speaker Verification in the Wild
Abstract: Performance of automatic speaker verification (ASV) systems is very sensitive to mismatch between training (source) and testing (target) domains. The best way to address domain mismatch is to perform matched condition training – gather sufficient labeled samples from the target domain and use them in training. However, in many cases this is too expensive or impractical. Usually, gaining access to unlabeled target domain data, e.g., from open source online media, and labeled data from other domains is more feasible. This work focuses on making ASV systems robust to uncontrolled (‘wild’) conditions, with the help of some unlabeled data acquired from such conditions.
Given acoustic features from both domains, we propose learning a mapping function – a deep convolutional neural network (CNN) with an encoder-decoder architecture – between the features of the two domains. We explore training the network in two scenarios: on paired speech samples from both domains and on unpaired data. In the former case, where paired data is usually obtained via simulation, the CNN is treated as a non-linear regression function and trained to minimize the L2 loss between the original and predicted target-domain features. Though effective, we provide empirical evidence that this approach introduces distortions that affect verification performance. To address this, we explore training the CNN with an adversarial loss (along with L2), which makes the predicted features indistinguishable from the original ones and thus improves verification performance.
The above framework, though effective, cannot be used to train the network on unpaired data obtained by independently sampling speech from both domains. In this case, we first train a CNN using adversarial loss to map features from source to target. We, then, map the predicted features back to the source domain using an auxiliary network, and minimize a cycle-consistency loss between the original and reconstructed source features.
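A minimal sketch of the cycle-consistency objective described above (toy linear maps stand in for the CNNs; the names G, F, and the matrix A are illustrative assumptions, not the thesis's networks):

```python
import numpy as np

def cycle_consistency_loss(x_src, G, F):
    """Cycle-consistency: map source features to the target domain with G,
    map back with the auxiliary network F, and penalize the reconstruction
    error against the original source features."""
    recon = F(G(x_src))
    return float(np.mean((recon - x_src) ** 2))

# Toy invertible linear mappings standing in for the learned CNNs
A = np.array([[2.0, 0.0], [0.0, 0.5]])
G = lambda feats: feats @ A.T                      # source -> target
F = lambda feats: feats @ np.linalg.inv(A).T       # target -> source

x = np.random.default_rng(0).normal(size=(100, 2))  # toy source features
loss_good = cycle_consistency_loss(x, G, F)          # F inverts G: near zero
loss_bad = cycle_consistency_loss(x, G, lambda z: z)  # identity F: large
```

Minimizing this loss (jointly with the adversarial terms) is what allows training on unpaired samples, since no aligned target feature is ever required.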
To prevent the CNN from over-fitting when trained on limited amounts of data, we present a simple regularization technique. Our unsupervised adaptation approach using feature mapping also complements its supervised counterpart, where adaptation is done using labeled data from both domains. We focus on three domain mismatch scenarios: (1) sampling frequency mismatch between the domains, (2) channel mismatch, and (3) robustness to far-field and noisy speech acquired from wild conditions.
Title: Statistical Signal Processing Methods for Epigenetic Landscape Analysis
Abstract: Since the DNA structure was discovered in 1953, a great deal of effort has been put into studying this molecule in detail. We now know DNA comprises an organism’s genetic makeup and constitutes a blueprint for life. The study of DNA has dramatically increased our knowledge about cell function and evolution and has led to remarkable discoveries in biology and medicine.
Just as DNA is replicated during cell division, several chemical marks are also passed onto progeny during this process. Epigenetics studies these marks and represents a fascinating research area given their crucial role. Among all known epigenetic marks, 5mC DNA methylation is probably one of the most important, given its well-established association with various biological processes, such as development and aging, and diseases, such as cancer. The work in this dissertation focuses primarily on this epigenetic mark, although it has the potential to be applied to other heritable marks.
In the 1940s, Waddington introduced the term epigenetic landscape to conceptually describe cell pluripotency and differentiation. This concept lived in the abstract plane until Jenkinson et al. (2017, 2018) estimated actual epigenetic landscapes from whole-genome bisulfite sequencing (WGBS) data, work that led to startling results with biological implications in development and disease. Here, we introduce an array of novel computational methods that draw from that work. First, we present CPEL, a method that uses a variant of the original landscape proposed by Jenkinson et al., which, together with a new hypothesis testing framework, allows for the detection of DNA methylation imbalances between homologous chromosomes. Then, we present CpelTdm, a method that builds upon CPEL to perform differential methylation analysis between groups of samples using targeted bisulfite sequencing data. Finally, we extend the original probabilistic model proposed by Jenkinson et al. to estimate methylation landscapes and perform differential analysis from nanopore data.
Overall, this work addresses immediate needs in the study of DNA methylation. The methods presented here can lead to a better characterization of this critical epigenetic mark and enable biological discoveries with implications for diagnosing and treating complex human diseases.
Note: This is a virtual presentation. Check this page at a later date for the Zoom link to where the presentation will be taking place.
Title: Machine Learning for Beamforming in Audio, Ultrasound, and Radar
Abstract: Multi-sensor signal processing plays a crucial role in the working of several everyday technologies, from correctly understanding speech on smart home devices to ensuring aircraft fly safely. A specific type of multi-sensor signal processing called beamforming forms a central part of this thesis. Beamforming works by combining the information from several spatially distributed sensors to directionally filter information, boosting the signal from a certain direction but suppressing others. The idea of beamforming is key to the domains of audio, ultrasound, and radar.
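The core beamforming operation described above can be sketched as delay-and-sum (a toy, integer-sample version with assumed example values; real systems use fractional delays, adaptive weights, and calibrated geometries):

```python
import numpy as np

def delay_and_sum(signals, delays_s, fs):
    """Delay-and-sum beamformer: advance each sensor signal by its steering
    delay (in seconds) so the look-direction wavefront aligns, then average.
    Coherent signals from the look direction add up; arrivals from other
    directions are attenuated by destructive averaging."""
    out = np.zeros(len(signals[0]))
    for sig, d in zip(signals, delays_s):
        out += np.roll(sig, -int(round(d * fs)))   # integer-sample alignment
    return out / len(signals)

fs = 16000
t = np.arange(1024) / fs
src = np.sin(2 * np.pi * 440 * t)                  # toy source waveform
# Two-microphone toy array: the second mic hears the source 5 samples later
mics = [src, np.roll(src, 5)]
beamformed = delay_and_sum(mics, [0.0, 5 / fs], fs)
```

With the steering delays matched to the source direction, the output reproduces the source at full amplitude, which is the directional filtering the abstract refers to.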
Machine learning is the other central part of this thesis. Machine learning, and especially its sub-field of deep learning, has enabled breakneck progress in tackling several problems that were previously thought intractable. Today, machine learning powers many of the cutting-edge systems we see on the internet for image classification, speech recognition, language translation, and more.
In this dissertation, we look at beamforming pipelines in audio, ultrasound, and radar through a machine learning lens and endeavor to improve different parts of the pipelines using ideas from machine learning. We start off in the audio domain and derive a machine learning-inspired beamformer to tackle the problem of ensuring the audio captured by a camera matches its visual content, a problem we term audiovisual zooming. Staying in the audio domain, we then demonstrate how deep learning can be used to improve the perceptual quality of speech by repairing clipping, codec distortions, and gaps in speech.
Transitioning to the ultrasound domain, we improve the performance of short-lag spatial coherence ultrasound imaging by applying robust principal component analysis to exploit the differences in tissue texture at each short-lag value. Next, we use deep learning as an alternative to beamforming in ultrasound and improve the information extraction pipeline by simultaneously generating both a segmentation map and a high-quality B-mode image directly from raw received ultrasound data.
Finally, we move to the radar domain and study how deep learning can be used to improve signal quality in ultra-wideband synthetic aperture radar by suppressing radio frequency interference, random spectral gaps, and contiguous block spectral gaps. Because the networks are trained and applied on raw single-aperture data prior to beamforming, the approach can work with myriad sensor geometries and different beamforming equations, a crucial requirement in synthetic aperture radar.