Calendar

Oct
15
Thu
Thesis Proposal: Niharika Shimona D’Souza
Oct 15 @ 3:00 pm

Note: This is a virtual presentation.

Title: Mapping Brain Connectivity to Behavior: from Network Optimization Frameworks to Deep-Generative Hybrid Models

Abstract: Autism Spectrum Disorder (ASD) is a complex neurodevelopmental disorder characterized by multiple impairments and levels of disability that vary widely across the ASD spectrum. Currently, the most common methods of quantifying symptom severity are based almost solely on a trained clinician's evaluation. Recently, neuroimaging techniques such as resting-state functional MRI (rs-fMRI) and Diffusion Tensor Imaging (DTI) have been gaining popularity for studying aberrant brain connectivity in ASD. My thesis aims at linking the symptomatic characterization of ASD with the functional and structural organization of a typical patient's brain as given by rs-fMRI and DTI, respectively. My talk is organized into two main parts, as follows:

Network Optimization Models for rs-fMRI connectomics and clinical severity:
Analysis of a multi-subject rs-fMRI imaging study often begins at the group level, for example, by estimating group-averaged functional connectivity across all subjects. Data-driven machine learning techniques such as PCA, kernel PCA, and SVMs largely fail to capture both the group structure and the individual patient variability, and consequently generalize poorly to unseen patients. To overcome these limitations, we developed a matrix factorization technique that represents the rs-fMRI correlation matrices by decomposing them into a sparse set of representative subnetworks, modeled as rank-one outer products, which are combined using patient-specific non-negative coefficients. The network representations are fixed across the entire group; however, the strength of the subnetworks can vary across individuals. We significantly extend prior work in the area by using these network coefficients to simultaneously predict behavioral measures via techniques ranging from simple linear regression models to parametric kernel methods to Artificial Neural Networks (ANNs). The main novelty of the algorithms lies in jointly optimizing the regression/ANN weights in conjunction with the rs-fMRI matrix factors. By leveraging techniques from convex and non-convex optimization, these frameworks significantly outperform several state-of-the-art machine learning, graph-theoretic, and deep learning baselines at generalizing to unseen patients.
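
As a toy illustration of the representation described above (the names, sizes, and numbers are illustrative, and only the forward model is shown, not the joint optimization), each patient's correlation matrix is written as a non-negative combination of shared rank-one subnetworks, and the same coefficients feed a linear severity predictor:

```python
# Sketch of the generative model only; the actual frameworks jointly
# optimize the bases, coefficients, and regression/ANN weights.

def reconstruct(bases, coeffs):
    """Approximate a correlation matrix as sum_k c_k * b_k b_k^T,
    with patient-specific non-negative coefficients c_k."""
    n = len(bases[0])
    C = [[0.0] * n for _ in range(n)]
    for b, c in zip(bases, coeffs):
        assert c >= 0, "coefficients are constrained to be non-negative"
        for i in range(n):
            for j in range(n):
                C[i][j] += c * b[i] * b[j]  # rank-one outer product term
    return C

def predict_severity(coeffs, w, bias=0.0):
    """Linear regression on the subnetwork coefficients."""
    return bias + sum(wk * ck for wk, ck in zip(w, coeffs))

# Two shared subnetworks over 3 brain regions; one patient's loadings.
bases = [[1.0, 1.0, 0.0], [0.0, 1.0, 1.0]]
coeffs = [2.0, 0.5]
C = reconstruct(bases, coeffs)
severity = predict_severity(coeffs, w=[1.0, 2.0])
```

The bases are shared across the group, while the coefficients (and hence the severity prediction) vary per patient.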

Deep-Generative Hybrid Frameworks for Integrating Multimodal and Dynamic Connectivity with Behavior:
There is now growing evidence that functional connectivity between regions is a dynamic process evolving over a static anatomical connectivity profile, and that modeling this evolution is crucial to understanding ASD. Thus, we propose an integrated deep-generative framework that jointly models complementary information from resting-state functional MRI (rs-fMRI) connectivity and diffusion tensor imaging (DTI) tractography to extract predictive biomarkers of disease. The generative part of our framework is a structurally-regularized Dynamic Dictionary Learning (sr-DDL) model that decomposes the dynamic rs-fMRI correlation matrices into a collection of shared basis networks and time-varying patient-specific loadings. This matrix factorization is guided by the DTI tractography matrices to learn anatomically informed connectivity profiles. The deep part of our framework is an LSTM-ANN block, which models the temporal evolution of the patient-specific sr-DDL loadings to predict multidimensional clinical severity. Once again, our coupled optimization procedure collectively estimates the basis networks, the patient-specific dynamic loadings, and the neural network weights. Our hybrid model outperforms state-of-the-art baselines in a cross-validated setting and extracts interpretable multimodal neural signatures of brain dysfunction in ASD.

In recent years, graph neural networks have shown great promise in brain connectivity research due to their ability to underscore subtle interactions between communicating brain regions while exploiting the underlying hierarchy of brain organization. To conclude, I will present some ongoing explorations based on end-to-end graph convolutional networks that directly model the evolution of the rs-fMRI signals/connectivity patterns over the underlying anatomical DTI graphs.

Committee Members

Archana Venkataraman, Department of Electrical and Computer Engineering

Rene Vidal, Department of Biomedical Engineering

Carey E. Priebe, Department of Applied Mathematics & Statistics

Stewart Mostofsky, Director of Center for Neurodevelopmental and Imaging Research, Kennedy Krieger Institute

Kilian Pohl, Program Director, Image Analysis, Center for Health Sciences and Biomedical Computing, SRI International; Associate Professor of Psychiatry and Behavioral Sciences, Stanford University

 

Oct
16
Fri
Dissertation Defense: Golnoosh Kamali
Oct 16 @ 12:00 pm

Note: This is a virtual presentation.

Title: Transfer function models of cortico-cortical evoked potentials for the localization of seizures in medically refractory epilepsy patients

Abstract: Surgical resection of the seizure onset zone (SOZ) could potentially lead to seizure freedom in medically refractory epilepsy (MRE) patients. However, localizing the SOZ is a time-consuming, subjective process involving visual inspection of intracranial electroencephalographic (iEEG) recordings captured during invasive passive patient monitoring. Cortical stimulation is currently performed on patients undergoing invasive EEG monitoring mainly to map functional brain networks, such as language and motor networks. We hypothesized that the evoked responses from single-pulse electrical stimulation (SPES) can be used to localize the SOZ, as they may express the natural frequencies and connectivity of the iEEG network. We constructed patient-specific transfer function models from evoked responses recorded from 22 MRE patients who underwent SPES evaluation and iEEG monitoring. We then computed the frequency- and connectivity-dependent “peak gain” of the system, as measured by the H_∞ norm from systems theory, and the corresponding “floor gain,” which is the gain at which the frequency response dips 3 dB below the DC gain. In cases for which clinicians had high confidence in localizing the SOZ, the highest-peak-gain transfer functions with the smallest floor gains corresponded to stimulation of the clinically annotated SOZ and early-spread regions. In more complex cases, there was a large spread of the peak gains when the clinically annotated SOZ was stimulated. Interestingly, for patients who had successful surgeries, our peak-to-floor (PF) gain ratio agreed with clinical localization regardless of the complexity of the case. For patients with failed surgeries, the PF ratio did not match clinical annotations. Our findings suggest that transfer function gains and their corresponding frequency responses computed from SPES evoked responses may improve SOZ localization and thus surgical outcomes.
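
A minimal sketch of the gain quantities described above, assuming a simple second-order transfer function evaluated on a frequency grid (the system and its parameters are illustrative; the actual models are estimated from iEEG evoked responses):

```python
def freq_resp(num, den, w):
    """Evaluate H(jw) for polynomial coefficients (highest power first)."""
    s = 1j * w
    def poly(c):
        acc = 0j
        for ck in c:          # Horner evaluation of the polynomial at s
            acc = acc * s + ck
        return acc
    return poly(num) / poly(den)

def peak_and_floor(num, den, ws):
    """Grid estimate of the H_inf 'peak gain' and the 3 dB 'floor gain'."""
    peak = max(abs(freq_resp(num, den, w)) for w in ws)
    dc = abs(freq_resp(num, den, 0.0))   # DC gain
    floor = dc * 10 ** (-3.0 / 20.0)     # 3 dB below the DC gain
    return peak, floor

# Lightly damped second-order system: H(s) = w0^2 / (s^2 + 2*z*w0*s + w0^2)
w0, z = 1.0, 0.1
num = [w0 ** 2]
den = [1.0, 2 * z * w0, w0 ** 2]
ws = [k * 0.01 for k in range(1, 500)]
peak, floor = peak_and_floor(num, den, ws)
pf_ratio = peak / floor
```

A sharply resonant (high peak, low floor) response of this kind is what the talk associates with stimulation of the seizure onset zone.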

Committee Members

Sridevi V. Sarma, Department of Biomedical Engineering

Joon Y. Kang, Department of Neurology

Archana Venkataraman, Department of Electrical and Computer Engineering

Nathan E. Crone, Department of Neurology

Oct
23
Fri
Dissertation Defense: Gaspar Tognetti
Oct 23 @ 2:00 pm

Note: This is a virtual presentation.

Title: Circuits and Architecture for Bio-Inspired AI Accelerators

Abstract: Technological advances in microelectronics, envisioned through Moore’s law, have led to more powerful processors that can handle complex and computationally intensive tasks. Nonetheless, these advancements through technology scaling have come at the unfavorable cost of significantly larger power consumption, which has posed challenges for data processing centers and for computing at scale. Moreover, with the emergence of mobile computing platforms constrained by power and bandwidth for distributed computing, the need for more energy-efficient, scalable local processing has become more significant.

Unconventional Compute-in-Memory (CiM) architectures such as the analog winner-takes-all associative-memory, the Charge-Injection Device (CID) processor, and analog-array processing have been proposed as alternatives. Unconventional charge-based computation has been employed for neural network accelerators in the past, where impressive energy efficiency per operation has been attained in 1-bit vector-vector multiplications (VMMs), and in recent work, multi-bit vector-vector multiplications. A similar approach was used in earlier work, where a charge-injection device array was utilized to store binary coded vectors, and computations were done using binary or multi-bit inputs in the charge domain; computation is carried out by counting quanta of charge at the thermal noise limit, using packets of about 1000 electrons. These systems are neither analog nor digital in the traditional sense but employ mixed-signal circuits to count the packets of charge and hence we call them Quasi-Digital. By amortizing the energy costs of the mixed-signal encoding/decoding over compute-vectors with a large number of elements, high energy efficiencies can be achieved.

In this dissertation, I present a design framework for AI accelerators using scalable compute-in-memory architectures. On the device level, two primitive elements are designed and characterized as target storage technologies: (i) a multilevel non-volatile computational cell and (ii) a pseudo Dynamic Random-Access Memory (pseudo-DRAM) computational bit-cell. Experimental results in deep-submicron CMOS processes demonstrate successful operation; subsequently, behavioral models were developed and employed in large-scale system simulations and emulations. Thereafter, at the level of circuit description, compute-in-memory crossbars and mixed-signal circuits were designed, allowing seamless connectivity to digital controllers. At the level of data representation, both binary and stochastic-unary coding are used to compute Vector-Vector Multiplications (VMMs) at the array level, demonstrating successful experimental results and providing insight into the integration requirements that larger systems may demand. Finally, on the architectural level, two AI accelerator architectures for data center processing and edge computing are discussed. Both designs are scalable multi-core Systems-on-Chip (SoCs), where vector-processor arrays are tiled on a 2-layer Network-on-Chip (NoC), enabling neighbor communication and flexible compute vs. memory trade-off. General purpose Arm/RISCV co-processors provide adequate bootstrapping and system-housekeeping and a high-speed interface fabric facilitates Input/Output to main memory.

Committee Members

Andreas Andreou, Department of Electrical and Computer Engineering

Ralph Etienne-Cummings, Department of Electrical and Computer Engineering

Philippe Pouliquen, Department of Electrical and Computer Engineering

Dissertation Defense: Ruizhi Li
Oct 23 @ 2:00 pm

Note: This is a virtual presentation.

Title: An Efficient and Robust Multi-Stream Framework for End-to-End Speech Recognition

Abstract: In voice-enabled domestic or meeting environments, distributed microphone arrays aim to process distant-speech interaction into text with high accuracy. However, when dynamic corruption from noise, reverberation, or human movement is present, there is no guarantee that any microphone array (stream) is constantly informative. In these cases, an appropriate strategy to dynamically fuse streams or select the most informative array is necessary.

The multi-stream paradigm in Automatic Speech Recognition (ASR) considers scenarios where parallel streams carry diverse or complementary task-related knowledge. Such streams could be microphone arrays, frequency bands, or various modalities. Hence, robust stream fusion is crucial to emphasize the more informative streams over corrupted ones, especially under unseen conditions. This thesis focuses on improving the performance and robustness of speech recognition in multi-stream scenarios.

In recent years, with the increasing use of Deep Neural Networks (DNNs) in ASR, End-to-End (E2E) approaches, which directly transcribe human speech into text, have received greater attention. In this thesis, a multi-stream framework is presented based on the joint Connectionist Temporal Classification/Attention (CTC/ATT) E2E model, where parallel streams are represented by separate encoders. On top of the regular attention networks, a secondary stream-fusion network steers the decoder toward the most informative streams. Two representative frameworks are proposed: Multi-Encoder Multi-Array (MEM-Array) and Multi-Encoder Multi-Resolution (MEM-Res).
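
A minimal sketch of the stream-fusion idea, under the simplifying assumption that the fusion network reduces to a softmax over learned per-stream scores weighting each encoder's context vector (the vectors and scores below are illustrative, not trained values):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def fuse_streams(contexts, scores):
    """The decoder receives a convex combination of per-stream context
    vectors, weighted by stream-level scores (hierarchical-attention style)."""
    w = softmax(scores)
    dim = len(contexts[0])
    fused = [sum(wk * c[i] for wk, c in zip(w, contexts)) for i in range(dim)]
    return fused, w

# Two streams (e.g. two microphone arrays); stream 0 scores higher,
# so its context dominates the fused representation fed to the decoder.
contexts = [[1.0, 0.0], [0.0, 1.0]]
fused, weights = fuse_streams(contexts, scores=[2.0, 0.0])
```

When one stream is corrupted, its score (and hence its weight) drops, which is the robustness mechanism the thesis exploits.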

The MEM-Array model aims at improving far-field ASR robustness using multiple microphone arrays, each handled by a separate encoder. Since an increasing number of streams (encoders) requires substantial memory and massive amounts of parallel data, a practical two-stage training strategy is designed to address these issues. Furthermore, a two-stage augmentation scheme is presented to improve the robustness of the multi-stream model, where a small amount of parallel data is sufficient to achieve competitive results. In MEM-Res, two heterogeneous encoders with different architectures, temporal resolutions, and separate CTC networks work in parallel to extract complementary information from the same acoustics. Compared with the best single-stream performance, both models achieve substantial improvements and also outperform various conventional fusion strategies.

While the proposed framework optimizes information in multi-stream scenarios, this thesis also studies Performance Monitoring (PM) measures to predict whether the recognition result of an end-to-end model is reliable, without ground-truth knowledge. Four different PM techniques are investigated, suggesting that PM measures based on attention distributions and decoder posteriors are well correlated with the true performance.

Committee Members

Hynek Hermansky, Department of Electrical and Computer Engineering

Shinji Watanabe, Department of Electrical and Computer Engineering

Najim Dehak, Department of Electrical and Computer Engineering

Gregory Sell, JHU Human Language Technology Center of Excellence

Nov
5
Thu
Thesis Proposal: Jeff Craley
Nov 5 @ 3:00 pm

Note: This is a virtual presentation.

Title: Localizing Seizure Foci with Deep Neural Networks and Graphical Models

Abstract: Worldwide estimates of the prevalence of epilepsy range from 1-3% of the total population, making it one of the most common neurological disorders. With its wide prevalence and dramatic effects on quality of life, epilepsy represents a large and ongoing public health challenge. Critical to the treatment of focal epilepsy is the localization of the seizure onset zone. The seizure onset zone is defined as the region of the cortex responsible for the generation of seizures. In the clinic, scalp electroencephalography (EEG) recording is the first modality used to localize the seizure onset zone.

My work focuses on developing machine learning techniques to localize this zone from these recordings. Using Bayesian techniques, I will present graphical models designed to capture the observed spreading of seizures in clinical EEG recordings. These models directly encode clinically observed seizure spreading phenomena to capture seizure onset and evolution. Using neural networks, the raw EEG signal is evaluated for seizure activity. In this talk, I will propose extensions to these techniques employing semi-supervised learning and architectural improvements for training sophisticated neural networks designed to analyze scalp EEG signals. In addition, I will propose modeling improvements to current graphical models for evaluating the confidence of localization results.

Committee Members

Archana Venkataraman (Department of Electrical and Computer Engineering)

Sri Sarma (Department of Biomedical Engineering)

Rene Vidal (Department of Biomedical Engineering)

Richard Leahy (Department of Electrical Engineering Systems – University of Southern California)

Thesis Proposal: Yan Jiang
Nov 5 @ 3:00 pm

Note: This is a virtual presentation.

Title: Leveraging Inverter-Based Frequency Control in Low-Inertia Power Systems

Abstract: The shift from conventional synchronous generation to renewable converter-interfaced sources has led to a noticeable degradation of power system frequency dynamics. Fortunately, recent technological advancements in power electronics and electric storage make it possible to enable higher renewable energy penetration by means of inverter-interfaced storage units. With proper control approaches, fast inverter dynamics can ensure the rapid response of storage units to mitigate this degradation. A straightforward choice is to emulate the damping effect and/or inertial response of synchronous generators through droop control or virtual inertia, yet these approaches do not necessarily fully exploit the benefits of inverter-interfaced storage units. For instance, droop control sacrifices steady-state effort share to improve dynamic performance, while virtual inertia amplifies frequency measurement noise. This work thus seeks to challenge this naive choice of mimicking synchronous generator characteristics and instead advocates a principled control design perspective. To achieve this goal, we build our analysis upon quantifying power network dynamic performance using $\mathcal L_2$ and $\mathcal L_\infty$ norms, so as to perform a systematic study evaluating the effect of different control approaches on both frequency response metrics and storage economic metrics.
The main contributions of this project will be as follows: (i) We will propose a novel dynamic droop control approach for grid-following inverters that can be tuned to achieve low noise sensitivity, fast synchronization, and Nadir elimination, without affecting the steady-state performance; (ii) We will propose a new frequency-shaping control approach that allows trading off between the rate of change of frequency (RoCoF) and storage control effort; (iii) We will further extend the proposed solutions to operate in a grid-forming setting suitable for a non-stiff power grid, where the amplitude and frequency of the grid voltage are not well regulated.
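
As a rough illustration of the droop trade-off mentioned above, consider a single-bus toy model with made-up parameters (this is not the proposed controllers, only the classical droop effect): more droop gain shrinks the steady-state frequency deviation after a load step, at the cost of storage effort.

```python
def simulate_droop(m, d, k_droop, dP, dt=0.001, steps=60000):
    """Forward-Euler simulation of a single-bus swing equation
        m * dw/dt = -(d + k_droop) * w + dP,
    where w is the frequency deviation, d the native damping, and
    k_droop the inverter droop gain (illustrative toy model)."""
    w = 0.0
    for _ in range(steps):
        dw = (-(d + k_droop) * w + dP) / m
        w += dt * dw
    return w

# A -1 p.u. load step; the steady-state deviation is dP / (d + k_droop).
w_no_droop = simulate_droop(m=5.0, d=1.0, k_droop=0.0, dP=-1.0)
w_droop = simulate_droop(m=5.0, d=1.0, k_droop=4.0, dP=-1.0)
```

The steady-state deviation goes from roughly -1.0 to roughly -0.2 p.u. here; the thesis studies how to obtain such improvements without the droop/virtual-inertia drawbacks, using $\mathcal L_2$ and $\mathcal L_\infty$ performance metrics.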

Committee Members

Enrique Mallada (Department of Electrical & Computer Engineering)

Pablo A. Iglesias (Department of Electrical & Computer Engineering)

Dennice F. Gayme (Department of Mechanical Engineering)

Nov
12
Thu
WSE Trailblazer Seminar Series: Charles Johnson-Bey PhD, JHU ECE ‘89
Nov 12 @ 3:00 pm

Note: This is a virtual presentation.

Title: Think Bigger: Empower Yourself to Change the World

Abstract: During this talk, I will share some of my experiences and ultimately challenge the audience to place their research into a greater context. We must actively pursue ways to innovate by expanding our thinking about how we positively impact society. I will explore how a kid from East Baltimore grew up and developed the tools, skills, and abilities to thrive in a career where he currently leverages the best technology and expertise from around the globe in order to translate ideas into solutions that solve some of the world’s most complex problems.

Bio: Dr. Charles Johnson-Bey is a Senior Vice President at Booz Allen Hamilton. He is a global leader in technology innovation and uniquely leverages the intersection of technology, strategy, and business to create & capture value, lead change and drive execution. Dr. Johnson-Bey has more than 25 years of engineering experience spanning cyber resilience, signal processing, system architecture, prototyping, and hardware. Prior to joining Booz Allen, he was a research engineer at Motorola Corporate Research Labs and Corning Incorporated and taught electrical engineering at Morgan State University. He also worked at Lockheed Martin Corporation for 17 years, where he galvanized the company’s cyber resources and led research and development activities with organizations including Oak Ridge National Laboratory, Microsoft Research, and the GE Global Research Center. He serves on the Whiting School of Engineering Advisory Board and the Electrical and Computer Engineering Advisory Committee, both at Johns Hopkins University. He is also on the Cybersecurity Institute Advisory Board for the Community College of Baltimore County. Dr. Johnson-Bey received a B.S. in Electrical and Computer Engineering from Johns Hopkins University and both an M.S. and Ph.D. in Electrical Engineering from the University of Delaware.

This event is co-hosted by the ECE Department and the Whiting School of Engineering.

Nov
19
Thu
Thesis Proposal: Puyang Wang
Nov 19 @ 3:00 pm

Note: This is a virtual presentation.

Title: Accelerating Magnetic Resonance Imaging using Convolutional Recurrent Neural Networks

Abstract: Fast and accurate MRI image reconstruction from undersampled data is critically important in clinical practice. Compressed sensing-based methods are widely used in image reconstruction, but they are slow due to their iterative algorithms. Deep learning-based methods have shown promising advances in recent years. However, recovering fine details from highly undersampled data is still challenging. Moreover, the current protocol for Amide Proton Transfer-weighted (APTw) imaging commonly starts with the acquisition of high-resolution T2-weighted (T2w) images, followed by APTw imaging at a particular geometry and locations (i.e., slices) determined by the acquired T2w images. Although many advanced MRI reconstruction methods have been proposed to accelerate MRI, existing methods for APTw MRI lack the capability of taking advantage of structural information in the acquired T2w images for reconstruction. In this work, we introduce a novel deep learning-based method with Convolutional Recurrent Neural Networks (CRNN) to reconstruct the image from multiple scales. Finally, we explore the use of the proposed Recurrent Feature Sharing (RFS) reconstruction module to utilize intermediate features extracted from the matched T2w image by the CRNN, so that the missing structural information can be incorporated into the undersampled APT raw image, effectively improving the image quality of the reconstructed APTw image.

Committee Members

Vishal M. Patel, Department of Electrical and Computer Engineering

Rama Chellappa, Department of Electrical and Computer Engineering

Shanshan Jiang, Department of Radiology and Radiological Science

Dec
3
Thu
Thesis Proposal: Xing Di
Dec 3 @ 3:00 pm

Note: This is a virtual presentation.

Title: Deep Learning-based Heterogeneous Face Recognition

Abstract: Face Recognition (FR) is one of the most widely studied problems in the computer vision and biometrics research communities due to its applications in authentication, surveillance, and security. Various methods have been developed over the last two decades that specifically attempt to address challenges such as aging, occlusion, disguise, and variations in pose, expression, and illumination. In particular, convolutional neural network (CNN)-based FR methods have gained significant traction in recent years. Deep CNN-based methods have achieved impressive performance on current FR benchmarks. Despite the success of CNN-based methods in addressing various challenges in FR, they are fundamentally limited to recognizing face images collected in the visible or near-infrared spectrum. In many practical scenarios, such as surveillance in low-light conditions, one has to detect and recognize faces that are captured using thermal cameras. However, the performance of many deep learning-based methods degrades significantly when they are presented with thermal face images.

Thermal-to-visible face verification is a challenging problem due to the large domain discrepancy between the modalities. Existing approaches either attempt to synthesize visible faces from thermal faces or extract robust features from these modalities for cross-modal matching. We present a work in which we use attributes extracted from visible images to synthesize the attribute-preserved visible images from thermal imagery for cross-modal matching. A pre-trained VGG-Face network is used to extract the attributes from the visible image. Then, a novel multi-scale generator is proposed to synthesize the visible image from the thermal image guided by the extracted attributes. Finally, a pre-trained VGG-Face network is leveraged to extract features from the synthesized image and the input visible image for verification.

Committee Members

Rama Chellappa, Department of Electrical and Computer Engineering

Carlos Castillo, Department of Electrical and Computer Engineering

Vishal Patel, Department of Electrical and Computer Engineering

Dec
10
Thu
Thesis Proposal: Yufan He
Dec 10 @ 3:00 pm

Note: This is a virtual presentation.

Title: Retina OCT image analysis using deep learning methods

Abstract: Optical coherence tomography (OCT) is a non-invasive imaging modality which uses low-coherence light waves to take cross-sectional images of optically scattering media (e.g., the human retina). OCT has been widely used in diagnosing retinal and neural diseases by imaging the human retina. The thicknesses of retinal layers are important biomarkers for neurological diseases like multiple sclerosis (MS). The peripapillary retinal nerve fiber layer (pRNFL) and ganglion cell plus inner plexiform layer (GCIP) thicknesses can be used to assess global disease progression of MS patients. Automated OCT image analysis tools are critical for quantitatively monitoring disease progression and exploring biomarkers. With the development of more powerful computational resources, deep learning-based methods have achieved much better performance in accuracy, speed, and algorithmic flexibility for many image analysis tasks. However, these emerging deep learning methods are not satisfactory when directly applied to OCT image analysis tasks like retinal layer segmentation without task-specific knowledge.

This thesis aims to develop a set of novel deep learning-based methods for retinal OCT image analysis. Specifically, we focus on retinal layer segmentation from macular OCT images. Image segmentation is the process of classifying each pixel in a digital image into different classes. Deep learning methods are powerful pixel classifiers, but it is hard to incorporate explicit rules into them. For retinal OCT images, pixels belonging to different layer classes must satisfy the anatomical hierarchy (topology): pixels of an upper layer should have no overlap or gap with pixels of the layers beneath it. This topological criterion is usually achieved by sophisticated post-processing methods, and current deep learning methods cannot guarantee it on their own. To solve this problem, we aim to:

  • Develop an end-to-end deep learning segmentation method with guaranteed layer segmentation topology for retinal OCT images.

The deep learning model’s performance degrades badly when the test data is generated differently from the training data; thus, we aim to:

  • Develop domain adaptation methods to increase robustness of the deep learning methods to OCT images generated differently from network training data.

The deep learning pipeline will be used to analyze longitudinal OCT images of MS patients, where the subtle changes due to MS should be captured; thus, we aim to:

  • Develop a longitudinal OCT image analysis pipeline for consistent longitudinal segmentation with deep learning.
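
One way to see how a topology guarantee can be built into a network's output (a sketch of the general idea, not necessarily the exact parameterization used in this work): instead of classifying pixels directly, predict a non-negative thickness for each layer in a column and accumulate them, so boundaries can never cross and layers can never overlap or leave gaps.

```python
def boundaries_from_thicknesses(top, thicknesses):
    """Topology-guaranteed parameterization for one A-scan column:
    boundary k+1 = boundary k + thickness k, with thicknesses >= 0
    (in a network this non-negativity would come from, e.g., a ReLU)."""
    bounds = [top]
    for t in thicknesses:
        assert t >= 0, "thicknesses must be non-negative"
        bounds.append(bounds[-1] + t)  # cumulative sum: boundaries ordered
    return bounds

# Three layers in one column of an OCT image; a zero thickness is allowed
# (the layer vanishes locally) but boundaries can never cross.
bounds = boundaries_from_thicknesses(top=10.0, thicknesses=[3.0, 0.0, 5.0])
monotone = all(b1 <= b2 for b1, b2 in zip(bounds, bounds[1:]))
```

Because ordering holds by construction, no topology-fixing post-processing is needed, which is the property the first aim above targets end-to-end.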

 
