Note: This is a virtual presentation. Here is the link to where the presentation will take place.
Title: Towards Building a Clinically Inspired Ultrasound Innovation Hub: Design, Development, and Clinical Validation of Novel Ultrasound Hardware for Imaging, Therapeutics, Sensing, and Other Applications
Abstract: Ultrasound is a relatively established modality with a number of exciting, yet not fully explored, applications, ranging from imaging and image-guided navigation to tumor ablation, neuromodulation, piezoelectric surgery, and drug delivery. In this talk, Dr. Manbachi will discuss some of his ongoing projects, which aim to address low-frequency bone sonography, minimally invasive ablation of brain tumors in neuro-oncology, and implantable sensors for spinal cord blood flow measurement.
Bio: Dr. Manbachi is an Assistant Professor of Neurosurgery and Biomedical Engineering at Johns Hopkins University. His research interests include applications of sound and ultrasound to various neurosurgical procedures. These applications include imaging the spine and brain, detection of foreign bodies, remote ablation of brain tumors, and monitoring of blood flow and tissue perfusion, as well as other emerging applications such as neuromodulation and drug delivery. His teaching activities include mentorship of BME Design Teams as well as close collaboration with clinical experts in surgery and radiology at Johns Hopkins.
His previous work included the development of ultrasound-guided spine surgery. He obtained his PhD from the University of Toronto, under the supervision of Dr. Richard S.C. Cobbold. Prior to joining Johns Hopkins, he was a postdoctoral fellow at Harvard-MIT Division of Health Sciences and Technology (2015-16) and the founder and CEO of Spinesonics Medical (2012–2015), a spinoff from his doctoral studies.
Amir is an author of more than 25 peer-reviewed journal articles, more than 30 conference proceedings, 10 invention disclosures / patent applications, and a book entitled “Towards Ultrasound-guided Spinal Fusion Surgery.” He has mentored 150+ students, has so far raised $1.1M in funding, and his interdisciplinary research has been recognized by a number of awards, including the University of Toronto’s 2015 Inventor of the Year award, an Ontario Brain Institute 2013 fellowship, and Maryland Innovation Initiative and Cohen Translational funding.
Dr. Manbachi has extensive teaching experience, particularly in the fields of engineering design, medical imaging, and entrepreneurship (both at Hopkins and Toronto), for which he received the University of Toronto’s Teaching Excellence award in 2014, a nomination by students for the Johns Hopkins University career center’s “Career Champion” award (2018), and the Johns Hopkins University Whiting School of Engineering’s Robert B. Pond Sr. Excellence in Teaching Award (2018).
Note: This is a virtual presentation. Here is the link to where the presentation will take place.
Title: 5G Security – Opportunities and Challenges
Abstract: Software Defined Networking (SDN) and Network Function Virtualization (NFV) are the key pillars of future networks, including 5G and beyond, that promise to support emerging applications such as enhanced mobile broadband, ultra-low latency, and massive sensing applications while providing resiliency in the network. Service providers and other vertical industries (e.g., connected cars, IoT, eHealth) can leverage SDN/NFV to provide flexible and cost-effective services without compromising end-user quality of service (QoS). While NFV and SDN open the door to flexible networks and rapid service creation, they offer security opportunities while also, in some cases, introducing additional challenges and complexities. With the rapid proliferation of 4G and 5G networks, operators have now started trial deployments of network function virtualization, especially with the introduction of various virtualized network elements in the access and core networks. While several standardization bodies (e.g., ETSI, 3GPP, NGMN, ATIS, IEEE) have started looking into the many security issues introduced by SDN/NFV, additional work is needed with the larger security community, including vendors, operators, universities, and regulators.
This talk will address the evolution of cellular technologies towards 5G but will largely focus on the various security challenges and opportunities introduced by SDN/NFV and 5G networks, such as hypervisors, virtual network functions (VNFs), SDN controllers, orchestrators, network slicing, cloud RAN, edge cloud, and security function virtualization. The talk will introduce a threat taxonomy for 5G security from an end-to-end system perspective, the potential threats introduced by these enablers, and associated mitigation techniques. At the same time, some of the opportunities introduced by these pillars will also be discussed. The talk will also highlight some of the ongoing activities within various standards communities and will illustrate a few deployment use-case scenarios for security, including threat taxonomies for both operator and enterprise networks.
Bio: Ashutosh Dutta is currently a senior scientist and 5G Chief Strategist at the Johns Hopkins University Applied Physics Laboratory (JHU/APL). He is also a JHU/APL Sabbatical Fellow and adjunct faculty at The Johns Hopkins University. Ashutosh also serves as the chair of the Electrical and Computer Engineering department in the Engineering for Professionals program at Johns Hopkins University. His career, spanning more than 30 years, includes Director of Technology Security and Lead Member of Technical Staff at AT&T, CTO of Wireless for NIKSUN, Inc., Senior Scientist and Project Manager at Telcordia Research, Director of the Central Research Facility at Columbia University, adjunct faculty at NJIT, and Computer Engineer with TATA Motors. He has more than 100 conference and journal publications and standards specifications, three book chapters, and 31 issued patents. Ashutosh is co-author of the book “Mobility Protocols and Handover Optimization: Design, Evaluation and Application,” published by IEEE and John Wiley & Sons.
As a technical leader in 5G and security, Ashutosh has been serving as the founding Co-Chair of the IEEE Future Networks Initiative, which focuses on 5G standardization, education, publications, testbed, and roadmap activities. Ashutosh served as an IEEE Communications Society Distinguished Lecturer for 2017-2020 and serves as an ACM Distinguished Speaker (2020-2022). Ashutosh has served as the general Co-Chair for the premier IEEE 5G World Forums and has organized 65 5G World Summits around the world.
Ashutosh served as the chair of the IEEE Princeton / Central Jersey Section, Industry Relations Chair for Region 1 and MGA, Pre-University Coordinator for IEEE MGA, and vice chair of the Education Society Chapter of PCJS. He co-founded the IEEE STEM conference (ISEC) and helped to implement EPICS (Engineering Projects in Community Service) projects in several high schools. Ashutosh has served as the general Co-Chair of the IEEE STEM conference for the last 10 years. Ashutosh served as the Director of Industry Outreach for the IEEE Communications Society from 2014 to 2019. He was the recipient of the prestigious 2009 IEEE MGA Leadership Award and the 2010 IEEE-USA Professional Leadership Award. Ashutosh currently serves as Member-At-Large for the IEEE Communications Society for 2020-2022.
Ashutosh obtained his BS in Electrical Engineering from NIT Rourkela, India; MS in Computer Science from NJIT; and Ph.D. in Electrical Engineering from Columbia University, New York under the supervision of Prof. Henning Schulzrinne. Ashutosh is a Fellow of IEEE and senior member of ACM.
Note: This is a virtual presentation. Here is the link to where the presentation will take place.
Title: Single Image Based Crowd Counting Using Deep Learning
Abstract: Estimating counts and density maps from crowd images has a wide range of applications, such as video surveillance, traffic monitoring, public safety, and urban planning. In addition, techniques developed for crowd counting can be applied to related tasks in other fields of study, such as cell microscopy, vehicle counting, and environmental surveys. The task of crowd counting and density map estimation from a single image is difficult because it suffers from multiple issues such as occlusions, perspective changes, background clutter, non-uniform density, and intra-scene and inter-scene variations in scale and perspective. These issues are further exacerbated in highly congested scenes. In order to overcome these challenges, we propose a variety of deep learning architectures that specifically incorporate various aspects such as global/local context information, attention mechanisms, and specialized iterative and multi-level multi-pathway fusion schemes for combining information from multiple layers in a deep network. Through extensive experiments and evaluations on several crowd counting datasets, we demonstrate that the proposed networks achieve significant improvements over existing approaches.
We also recognize the need for large amounts of data for training the deep networks and their inability to generalize to new scenes and distributions. To overcome this challenge, we propose novel semi-supervised and weakly-supervised crowd counting techniques that effectively leverage large amounts of unlabeled/weakly-labeled data. In addition to developing techniques with the ability to learn from limited labeled data, we also introduce a new large-scale crowd counting dataset which can be used to train considerably larger networks. The proposed dataset consists of 4,372 high-resolution images with 1.51 million annotations. We made explicit efforts to ensure that the images are collected under a variety of diverse scenarios and environmental conditions. The dataset provides a richer set of annotations, such as dots, approximate bounding boxes, and blur levels.
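The density maps mentioned above are commonly derived from dot annotations by smoothing each annotated head location with a Gaussian kernel, so that the map integrates to the ground-truth count. A minimal sketch of this standard construction (the image size, kernel width, and dot locations are illustrative assumptions, not taken from the talk):

```python
import numpy as np

def density_map(dots, shape, sigma=4.0, radius=12):
    """Turn dot annotations (row, col) into a density map.

    Each head location is smoothed with a truncated, renormalized Gaussian,
    so the map integrates to the ground-truth count (heads are assumed to
    lie at least `radius` pixels from the image border for simplicity)."""
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    kernel = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    kernel /= kernel.sum()  # each head contributes exactly 1.0 to the map

    dmap = np.zeros(shape)
    for r, c in dots:
        r, c = int(r), int(c)
        dmap[r - radius:r + radius + 1, c - radius:c + radius + 1] += kernel
    return dmap

dots = [(20, 20), (40, 40), (41, 44)]  # three annotated heads
dmap = density_map(dots, shape=(64, 64))
print(round(dmap.sum(), 6))            # → 3.0, the head count
```

A network trained to regress such maps then yields a predicted count simply by summing its output map.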
Title: Deep Learning Based Methods for Ultrasound Image Segmentation and Magnetic Resonance Image Reconstruction
Abstract: In recent years, deep learning (DL) algorithms, in particular convolutional networks, have rapidly become a methodology of choice for analyzing medical images. They have shown promising performance in many medical image analysis (MIA) problems, including classification, segmentation, and reconstruction. However, the inherent differences between natural images and medical images (ultrasound, MRI, etc.) have hindered the performance of DL-based methods originally designed for natural images. Another obstacle for DL-based MIA is the limited availability of large-scale training datasets, as it has been shown that large and diverse datasets can effectively improve the robustness and generalization ability of DL networks.
In this thesis, we develop various deep learning-based approaches to address two medical image analysis problems. In the first problem, we focus on computer-assisted orthopedic surgery (CAOS) applications that use ultrasound as the intra-operative imaging modality. This problem requires an automatic and real-time algorithm to detect and segment bone surfaces and shadows in order to guide the orthopedic surgeon to a standardized diagnostic viewing plane with minimal artifacts. Due to the limitations of relatively small datasets and image differences across multiple ultrasound machines, we develop DL-based frameworks that leverage a local phase filtering technique and integrate it into the DL framework, thus improving robustness.
Finally, we propose a fast and accurate Magnetic Resonance Imaging (MRI) image reconstruction framework using a novel Convolutional Recurrent Neural Network (CRNN). Extensive experiments and evaluations on knee and brain datasets have shown outstanding results compared to traditional compressed sensing and other DL-based methods. Furthermore, we extend this method to enable multi-sequence reconstruction, where the T2-weighted MRI image provides guidance and improvement to the reconstruction of amide proton transfer-weighted images.
Carlos Castillo, Department of Electrical and Computer Engineering
Shanshan Jiang, Department of Radiology and Radiological Science
Ilker Hacihaliloglu, Department of Biomedical Engineering (Rutgers University)
Title: Unsupervised Domain Adaptation for Speaker Verification in the Wild
Abstract: Performance of automatic speaker verification (ASV) systems is very sensitive to mismatch between training (source) and testing (target) domains. The best way to address domain mismatch is to perform matched condition training – gather sufficient labeled samples from the target domain and use them in training. However, in many cases this is too expensive or impractical. Usually, gaining access to unlabeled target domain data, e.g., from open source online media, and labeled data from other domains is more feasible. This work focuses on making ASV systems robust to uncontrolled (‘wild’) conditions, with the help of some unlabeled data acquired from such conditions.
Given acoustic features from both domains, we propose learning a mapping function – a deep convolutional neural network (CNN) with an encoder-decoder architecture – between the features of the two domains. We explore training the network in two different scenarios: training on paired speech samples from both domains and training on unpaired data. In the former case, where the paired data is usually obtained via simulation, the CNN is treated as a non-linear regression function and is trained to minimize the L2 loss between the original and predicted features from the target domain. Though effective, we provide empirical evidence that this approach introduces distortions that affect verification performance. To address this, we explore training the CNN using an adversarial loss (along with L2), which makes the predicted features indistinguishable from the original ones and thus improves verification performance.
The above framework, though effective, cannot be used to train the network on unpaired data obtained by independently sampling speech from both domains. In this case, we first train a CNN using adversarial loss to map features from source to target. We, then, map the predicted features back to the source domain using an auxiliary network, and minimize a cycle-consistency loss between the original and reconstructed source features.
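As a toy illustration of the cycle-consistency idea described above, the sketch below uses linear maps in place of the two CNNs (the feature dimension and data are illustrative assumptions, not taken from the talk): mapping source features to the target domain and back should reconstruct the original features, and the round-trip error is the training signal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the two mapping networks: G (source -> target) and
# F (target -> source). In the talk these are CNNs trained adversarially.
G = rng.standard_normal((8, 8)) * 0.1 + np.eye(8)
F = np.linalg.inv(G)              # a perfect inverse gives zero cycle loss

def cycle_consistency_loss(x, G, F):
    """L1 cycle-consistency: reconstruct source features after a round trip."""
    x_hat = (x @ G.T) @ F.T       # source -> target -> back to source
    return np.abs(x_hat - x).mean()

x = rng.standard_normal((32, 8))  # a batch of source-domain feature vectors
print(cycle_consistency_loss(x, G, F))           # ~0 for an exact inverse
print(cycle_consistency_loss(x, G, np.eye(8)))   # > 0 when the round trip fails
```

In the actual framework both mappings are learned jointly, with the cycle-consistency loss added to the adversarial losses so that unpaired data suffices.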
To prevent the CNN from over-fitting when trained on limited amounts of data, we present a simple regularizing technique. Our unsupervised adaptation approach using feature mapping also complements its supervised counterpart, where adaptation is done using labeled data from both domains. We focus on three domain mismatch scenarios: (1) sampling frequency mismatch between the domains, (2) channel mismatch, and (3) robustness to far-field and noisy speech acquired from wild conditions.
Title: Statistical Signal Processing Methods for Epigenetic Landscape Analysis
Abstract: Since the DNA structure was discovered in 1953, a great deal of effort has been put into studying this molecule in detail. We now know DNA comprises an organism’s genetic makeup and constitutes a blueprint for life. The study of DNA has dramatically increased our knowledge about cell function and evolution and has led to remarkable discoveries in biology and medicine.
Just as DNA is replicated during cell division, several chemical marks are also passed on to progeny during this process. Epigenetics studies these marks, which represent a fascinating research area given their crucial role. Among all known epigenetic marks, 5mC DNA methylation is probably one of the most important, given its well-established association with various biological processes, such as development and aging, and diseases, such as cancer. The work in this dissertation focuses primarily on this epigenetic mark, although it has the potential to be applied to other heritable marks.
In the 1940s, Waddington introduced the term epigenetic landscape to conceptually describe cell pluripotency and differentiation. This concept remained abstract until Jenkinson et al. (2017, 2018) estimated actual epigenetic landscapes from whole-genome bisulfite sequencing (WGBS) data, work that led to startling results with biological implications in development and disease. Here, we introduce an array of novel computational methods that draw from that work. First, we present CPEL, a method that uses a variant of the original landscape proposed by Jenkinson et al., which, together with a new hypothesis testing framework, allows for the detection of DNA methylation imbalances between homologous chromosomes. Then, we present CpelTdm, a method that builds upon CPEL to perform differential methylation analysis between groups of samples using targeted bisulfite sequencing data. Finally, we extend the original probabilistic model proposed by Jenkinson et al. to estimate methylation landscapes and perform differential analysis from nanopore data.
Overall, this work addresses immediate needs in the study of DNA methylation. The methods presented here can lead to a better characterization of this critical epigenetic mark and enable biological discoveries with implications for diagnosing and treating complex human diseases.
Title: Deep Learning Based Face Image Synthesis
Abstract: Face image synthesis is an important problem in the biometrics and computer vision communities due to its applications in law enforcement and entertainment. In this thesis, we develop novel deep neural network models and associated loss functions for two face image synthesis problems, namely thermal to visible face synthesis and visual attribute to face synthesis.
In particular, for thermal to visible face synthesis, we propose a model which makes use of facial attributes to obtain better synthesis. We use attributes extracted from visible images to synthesize attribute-preserved visible images from thermal imagery. A pre-trained attribute predictor network is used to extract attributes from the visible image. Then, a novel multi-scale generator is proposed to synthesize the visible image from the thermal image guided by the extracted attributes. Finally, a pre-trained VGG-Face network is leveraged to extract features from the synthesized image and the input visible image for verification.
In addition, we propose another thermal to visible face synthesis method based on a self-attention generative adversarial network (SAGAN) which allows efficient attention-guided image synthesis. Rather than focusing only on synthesizing visible faces from thermal faces, we also propose to synthesize thermal faces from visible faces. Our intuition is based on the fact that thermal images also contain some discriminative information about the person for verification. Deep features from a pre-trained Convolutional Neural Network (CNN) are extracted from the original as well as the synthesized images. These features are then fused to generate a template which is then used for cross-modal face verification.
Regarding attribute-to-face image synthesis, we propose the Att2SK2Face model for face image synthesis from visual attributes via sketch. In this approach, we first synthesize a facial sketch corresponding to the visual attributes and then generate the face image based on the synthesized sketch. The proposed framework is based on a combination of two different Generative Adversarial Networks (GANs) – (1) a sketch generator network which synthesizes realistic sketches from the input attributes, and (2) a face generator network which synthesizes facial images from the synthesized sketch images with the help of facial attributes.
Finally, we propose another synthesis model, called Att2MFace, which can simultaneously synthesize multimodal faces from visual attributes without requiring paired data in different domains for training the network. We introduce a novel generator with multimodal stretch-out modules to simultaneously synthesize multimodal face images. Additionally, multimodal stretch-in modules are introduced in the discriminator which discriminates between real and fake images.
Title: Machine Learning for Beamforming in Ultrasound, Radar, and Audio
Abstract: Multi-sensor signal processing plays a crucial role in the working of several everyday technologies, from correctly understanding speech on smart home devices to ensuring aircraft fly safely. A specific type of multi-sensor signal processing called beamforming forms a central part of this thesis. Beamforming works by combining the information from several spatially distributed sensors to directionally filter information, boosting the signal from a certain direction but suppressing others. The idea of beamforming is key to the domains of ultrasound, radar, and audio.
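To make the "boosting one direction while suppressing others" idea concrete, here is a minimal delay-and-sum sketch for a linear array. The array geometry, sampling rate, and test signal are illustrative assumptions, not drawn from the dissertation.

```python
import numpy as np

def delay_and_sum(signals, positions, angle_deg, fs, c=343.0):
    """Delay-and-sum beamformer for a linear array (frequency-domain delays).

    signals:   (num_sensors, num_samples) array of time series
    positions: sensor coordinates along the array axis, in meters
    angle_deg: steering direction measured from broadside
    fs:        sampling rate in Hz; c: propagation speed (m/s; air here)
    """
    delays = positions * np.sin(np.deg2rad(angle_deg)) / c
    n = signals.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    out = np.zeros(n)
    for sig, tau in zip(signals, delays):
        # advance each channel by its steering delay via a phase ramp
        out += np.fft.irfft(np.fft.rfft(sig) * np.exp(2j * np.pi * freqs * tau), n)
    return out / len(signals)

# A 1 kHz plane wave arriving from 30 degrees on an 8-element array
fs, f0, c = 16000, 1000.0, 343.0
pos = np.arange(8) * 0.05
t = np.arange(512) / fs
taus = pos * np.sin(np.deg2rad(30.0)) / c
waves = np.stack([np.sin(2 * np.pi * f0 * (t - tau)) for tau in taus])

on = delay_and_sum(waves, pos, 30.0, fs)    # steered at the source
off = delay_and_sum(waves, pos, -30.0, fs)  # steered away from it
print(np.std(on) > 3 * np.std(off))         # → True
```

Steering delays align the wavefront across sensors, so the desired direction adds coherently while other directions partially cancel.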
Machine learning, succinctly defined by Tom Mitchell as “the study of algorithms that improve automatically through experience,” is the other central part of this thesis. Machine learning, especially its sub-field of deep learning, has enabled breakneck progress in tackling several problems that were previously thought intractable. Today, machine learning powers many of the cutting-edge systems we see on the internet for image classification, speech recognition, language translation, and more.
In this dissertation, we look at beamforming pipelines in ultrasound, radar, and audio from a machine learning lens and endeavor to improve different parts of the pipelines using ideas from machine learning. Starting off in the ultrasound domain, we use deep learning as an alternative to beamforming in ultrasound and improve the information extraction pipeline by simultaneously generating both a segmentation map and B-mode image of high quality directly from raw received ultrasound data.
Next, we move to the radar domain and study how deep learning can be used to improve signal quality in ultra-wideband synthetic aperture radar by suppressing radio frequency interference, random spectral gaps, and contiguous block spectral gaps. Because the networks are trained on and applied to raw single-aperture data prior to beamforming, the approach can work with myriad sensor geometries and different beamforming equations, a crucial requirement in synthetic aperture radar.
Finally, we move to the audio domain and derive a machine learning inspired beamformer to tackle the problem of ensuring the audio captured by a camera matches its visual content, a problem we term audiovisual zoom. Unlike prior work which is capable of only enhancing a few individual directions, our method enhances audio from a contiguous field of view.
Title: A Unified Visual Saliency Model for Neuromorphic Implementation
Abstract: Although computer capabilities have expanded tremendously, a significant wall remains between the computer and the human brain. The brain can process massive amounts of information obtained from a complex environment and control the entire body in real time with low energy consumption. This thesis tackles this mystery by modeling and emulating how the brain processes information based on the available knowledge of biological and artificial intelligence as studied in neuroscience, cognitive science, computer science, and computer engineering.
Saliency modeling relates to visual sense and biological intelligence. The retina captures and sends a vast amount of data about the environment to the brain. However, as the visual cortex cannot process all the information in detail at once, the early stages of visual processing discard unimportant information. Because only the fovea has high-resolution imaging, individuals move their eyeballs in the direction of the important part of the scene. Therefore, eye movements can be thought of as an observable output of the early visual process in the brain. Saliency modeling aims to understand this mechanism and predict eye fixations.
Researchers have built biologically plausible saliency models that emulate the biological process from the retina through the visual cortex. Although many saliency models have been proposed, most are not bio-realistic. This thesis models the biological mechanisms for the perception of texture, depth, and motion. While texture plays a vital role in the perception process, defining texture in a mathematical way is not easy. Thus, it is necessary to build an architecture of texture processing based on the biological perception mechanism. Binocular stereopsis is another intriguing function of the brain. While scholars have evaluated many computational algorithms for stereovision, pursuing biological plausibility means implementing a neuromorphic method into a saliency model. Motion is another critical clue that helps animals survive. In this thesis, the motion feature is implemented in a bio-realistic way based on neurophysiological observation.
Moreover, the thesis will integrate these processes and propose a unified saliency model that can handle 3D dynamic scenes in a similar way to how the brain deals with the real environment. Thus, this investigation will use saliency modeling to examine intriguing properties of human visual processing and discuss how the brain achieves this remarkable capability.
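For readers unfamiliar with saliency computation, a minimal center-surround sketch in the spirit of classic biologically inspired saliency models follows; the box filters standing in for Gaussian pyramid levels and the toy scene are illustrative assumptions, not the thesis's method.

```python
import numpy as np

def box_blur(img, k):
    """Crude separable box blur standing in for a Gaussian pyramid level."""
    kernel = np.ones(k) / k
    img = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, img)

def center_surround_saliency(img, center_k=3, surround_k=15):
    """Center-surround difference: fine-scale response minus coarse-scale response."""
    return np.abs(box_blur(img, center_k) - box_blur(img, surround_k))

# A dim scene with one bright patch: saliency should peak at the patch
scene = np.zeros((64, 64))
scene[30:34, 30:34] = 1.0
sal = center_surround_saliency(scene)
peak = np.unravel_index(np.argmax(sal), sal.shape)
print(peak)  # lands inside the bright patch (rows/cols 30-33)
```

A full model of this kind computes such center-surround maps over several feature channels (intensity, color, orientation, and, as in this thesis, texture, depth, and motion) and combines them into one master saliency map.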