Note: This is a virtual presentation.
Title: Towards Building a Clinically-Inspired Ultrasound Innovation Hub: Design, Development, and Clinical Validation of Novel Ultrasound Hardware for Imaging, Therapeutics, Sensing, and Other Applications
Abstract: Ultrasound is a relatively established modality with a number of exciting, yet not fully explored, applications, ranging from imaging and image-guided navigation to tumor ablation, neuromodulation, piezoelectric surgery, and drug delivery. In this talk, Dr. Manbachi will discuss some of his ongoing projects, which aim to address low-frequency bone sonography, minimally invasive ablation of neuro-oncological tumors, and implantable sensors for spinal cord blood flow measurement.
Bio: Dr. Manbachi is an Assistant Professor of Neurosurgery and Biomedical Engineering at Johns Hopkins University. His research interests include applications of sound and ultrasound to various neurosurgical procedures. These applications include imaging the spine and brain, detection of foreign body objects, remote ablation of brain tumors, and monitoring of blood flow and tissue perfusion, as well as other upcoming applications such as neuromodulation and drug delivery. His teaching activities include mentorship of BME Design Teams as well as close collaboration with clinical experts in Surgery and Radiology at Johns Hopkins.
His previous work included the development of ultrasound-guided spine surgery. He obtained his PhD from the University of Toronto under the supervision of Dr. Richard S.C. Cobbold. Prior to joining Johns Hopkins, he was a postdoctoral fellow at the Harvard-MIT Division of Health Sciences and Technology (2015–2016) and the founder and CEO of Spinesonics Medical (2012–2015), a spinoff from his doctoral studies.
Amir is an author on more than 25 peer-reviewed journal articles, more than 30 conference proceedings, 10 invention disclosures / patent applications, and a book entitled “Towards Ultrasound-guided Spinal Fusion Surgery.” He has mentored 150+ students, has so far raised $1.1M in funding, and his interdisciplinary research has been recognized by a number of awards, including the University of Toronto’s 2015 Inventor of the Year award, an Ontario Brain Institute 2013 fellowship, the Maryland Innovation Initiative, and Cohen Translational Funding.
Dr. Manbachi has extensive teaching experience, particularly in the fields of engineering design, medical imaging, and entrepreneurship (both at Hopkins and Toronto), for which he received the University of Toronto’s Teaching Excellence award in 2014, a nomination for the Johns Hopkins University career center’s “Career Champion” award (2018), and the Johns Hopkins University Whiting School of Engineering’s Robert B. Pond Sr. Excellence in Teaching Award (2018).
Note: This is a virtual presentation.
Title: 5G Security – Opportunities and Challenges
Abstract: Software Defined Networking (SDN) and Network Function Virtualization (NFV) are the key pillars of future networks, including 5G and beyond, that promise to support emerging applications such as enhanced mobile broadband, ultra-low latency, and massive sensing-type applications while providing resiliency in the network. Service providers and other vertical industries (e.g., connected cars, IoT, eHealth) can leverage SDN/NFV to provide flexible and cost-effective services without compromising end-user quality of service (QoS). While NFV and SDN open the door to flexible networks and rapid service creation, they offer security opportunities while also, in some cases, introducing additional challenges and complexities. With the rapid proliferation of 4G and 5G networks, operators have now started trial deployments of network function virtualization, especially with the introduction of various virtualized network elements in the access and core networks. While several standardization bodies (e.g., ETSI, 3GPP, NGMN, ATIS, IEEE) have started looking into the many security issues introduced by SDN/NFV, additional work is needed with the larger security community, including vendors, operators, universities, and regulators.
This talk will address the evolution of cellular technologies towards 5G but will largely focus on various security challenges and opportunities introduced by SDN/NFV and 5G networks, such as hypervisors, Virtual Network Functions (VNFs), SDN controllers, orchestrators, network slicing, cloud RAN, edge cloud, and security function virtualization. This talk will introduce a threat taxonomy for 5G security from an end-to-end system perspective, the potential threats introduced by these enablers, and associated mitigation techniques. At the same time, some of the opportunities introduced by these pillars will also be discussed. This talk will also highlight some of the ongoing activities within various standards communities and will illustrate a few deployment use case scenarios for security, including a threat taxonomy for both operator and enterprise networks.
Bio: Ashutosh Dutta is currently a senior scientist and 5G Chief Strategist at the Johns Hopkins University Applied Physics Laboratory (JHU/APL). He is also a JHU/APL Sabbatical Fellow and adjunct faculty at The Johns Hopkins University. Ashutosh also serves as the chair for the Electrical and Computer Engineering department of the Engineering for Professionals program at Johns Hopkins University. His career, spanning more than 30 years, includes Director of Technology Security and Lead Member of Technical Staff at AT&T, CTO of Wireless for NIKSUN, Inc., Senior Scientist and Project Manager at Telcordia Research, Director of the Central Research Facility at Columbia University, adjunct faculty at NJIT, and Computer Engineer with TATA Motors. He has more than 100 conference and journal publications and standards specifications, three book chapters, and 31 issued patents. Ashutosh is co-author of the book “Mobility Protocols and Handover Optimization: Design, Evaluation and Application,” published by IEEE and John Wiley & Sons.
As a technical leader in 5G and security, Ashutosh has been serving as the founding Co-Chair for the IEEE Future Networks Initiative, which focuses on 5G standardization, education, publications, testbed, and roadmap activities. Ashutosh serves as an IEEE Communications Society Distinguished Lecturer for 2017–2020 and as an ACM Distinguished Speaker (2020–2022). Ashutosh has served as the general Co-Chair for the premier IEEE 5G World Forums and has organized 65 5G World Summits around the world.
Ashutosh served as the chair for the IEEE Princeton / Central Jersey Section, Industry Relations Chair for Region 1 and MGA, Pre-University Coordinator for IEEE MGA, and vice chair of the Education Society Chapter of PCJS. He co-founded the IEEE STEM conference (ISEC) and helped to implement EPICS (Engineering Projects in Community Service) projects in several high schools. Ashutosh has served as the general Co-Chair for the IEEE STEM conference for the last 10 years. Ashutosh served as the Director of Industry Outreach for the IEEE Communications Society from 2014 to 2019. He was the recipient of the prestigious 2009 IEEE MGA Leadership award and the 2010 IEEE-USA professional leadership award. Ashutosh currently serves as Member-at-Large for the IEEE Communications Society for 2020–2022.
Ashutosh obtained his BS in Electrical Engineering from NIT Rourkela, India; his MS in Computer Science from NJIT; and his Ph.D. in Electrical Engineering from Columbia University, New York, under the supervision of Prof. Henning Schulzrinne. Ashutosh is a Fellow of the IEEE and a senior member of the ACM.
Note: This is a virtual presentation.
Title: Student-Teacher Learning Techniques for Bilingual and Low Resource OCR
Abstract: Optical Character Recognition (OCR) is the automatic generation of a transcription given a line image of text. Current methods have been very successful on printed English text, with Character Error Rates of less than 1%. However, clean datasets are not commonly seen in real-life applications. There is a move in OCR towards ‘text in the wild’: conditions with lower-resolution images such as storefronts, street signs, and billboards. Oftentimes these texts contain multiple scripts, especially in countries where multiple languages are spoken. In addition, Latin characters are widely seen no matter what the language. The presence of multilingual text poses a unique challenge.
Traditional OCR methods involve text localization, script identification, and then text recognition. A separate system is used for each task, and the results from one system are passed to the next. However, the downside of this pipeline approach is that errors propagate downstream and there is no way of providing feedback upstream. These downsides can be mitigated with fully integrated approaches, where one large system does text localization, script identification, and text recognition jointly. These approaches are also sometimes known as end-to-end approaches in the literature.
With larger and larger networks, there is also a need for a greater amount of training data. However, this data may be difficult to obtain if the target language is low resource. There are also problems if the data that is obtained is in a slightly different domain, for example, printed versus handwritten text. This is where synthetic data generation techniques and domain adaptation techniques can be helpful.
Given these current challenges in OCR, this thesis proposal focuses on training integrated (i.e., end-to-end) bilingual systems and on domain adaptation techniques. Both of these objectives can be achieved using student-teacher learning methods. The basic idea of this approach is to have a trained teacher model contribute an additional loss function while training a student model. The outputs of the teacher are used as soft targets for the student to learn from. The following experiments will be performed:
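The soft-target idea can be sketched in a few lines. The snippet below is an illustrative sketch, not the proposal's actual training code; the function names, the temperature `T`, and the weighting `alpha` are all assumptions, following the common Hinton-style distillation recipe.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax along the last axis."""
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def student_teacher_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Hard-label cross-entropy plus a soft-target term that pushes the
    student's distribution toward the teacher's."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    # soft-target term: cross-entropy between teacher and student distributions
    soft = -(p_t * np.log(p_s + 1e-12)).sum(axis=-1).mean()
    # hard-label cross-entropy against the ground-truth labels
    p = softmax(student_logits)
    hard = -np.log(p[np.arange(len(labels)), labels] + 1e-12).mean()
    # T**2 keeps the soft-term gradient scale comparable across temperatures
    return alpha * (T ** 2) * soft + (1 - alpha) * hard
```

In practice this scalar would be minimized by gradient descent on the student's parameters while the teacher stays frozen.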
Title: Optical coherence tomography signal processing in complex domain
Abstract: Optical coherence tomography (OCT) plays an indispensable role in clinical fields such as ophthalmology and dermatology. Over the past 30 years, OCT has gone through tremendous development, with both hardware improvements and novel signal processing techniques. Hardware improvements such as the use of adaptive optics (AO) and vertical-cavity surface-emitting lasers (VCSELs) help push the fundamental limits of OCT imaging capability. Novel signal processing techniques aim to push the imaging capability beyond current hardware architecture limitations. Often, novel signal processing techniques achieve better performance than hardware modifications while keeping costs to a minimum. The purpose of this dissertation proposal is to develop novel OCT signal processing techniques that provide new imaging capabilities and overcome current imaging limitations.
The OCT signal, as the result of interference between the sample back-scattered light and the reference light, is complex-valued and contains both amplitude and phase information. The amplitude information is mostly used for OCT structural imaging, while the phase information is mostly used for OCT functional imaging. Usually, amplitude-based methods are more robust since they are less prone to noise, while phase-based methods are better at precise quantitative measurements since they are more sensitive to micro-displacements. This dissertation proposal focuses on three advanced OCT signal processing techniques in both the amplitude and phase domains.
The first signal processing technique proposed is the amplitude-based BC-mode OCT image visualization for microsurgery guidance, where multiple sparsely sampled B-scans are combined to generate a single cross-section image with enhanced instrument and tissue layer visibility and reduced shadowing artifacts. The performance of the proposed method is demonstrated by guiding a 30-gauge needle into an ex-vivo human cornea.
The second signal processing technique proposed is the amplitude-based optical flow OCT (OFOCT) for determining accurate velocity fields. A modified continuity constraint is used to compensate for the Fourier-domain OCT (FDOCT) sensitivity fall-off. Spatial-temporal smoothness constraints are used to make the optical flow problem well-posed and to reduce noise in the velocity fields. The accuracy of the proposed method is verified through phantom flow experiments using a diluted milk powder solution as the scattering medium, in both cases of advective flow and turbulent flow.
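As a point of reference for the smoothness-constrained optical flow formulation, the following is a minimal sketch of the classic Horn-Schunck scheme (brightness-constancy data term plus a global smoothness penalty weighted by `alpha`). This is not the proposal's OFOCT method, which additionally modifies the continuity constraint for FDOCT sensitivity fall-off; the sketch only illustrates how a smoothness constraint regularizes the flow field.

```python
import numpy as np

def horn_schunck(I1, I2, alpha=1.0, n_iter=100):
    """Classic Horn-Schunck optical flow between two frames I1, I2:
    iteratively updates (u, v) toward the local flow average while
    penalizing violation of brightness constancy."""
    Ix = np.gradient(I1, axis=1)       # spatial image gradients
    Iy = np.gradient(I1, axis=0)
    It = I2 - I1                       # temporal gradient
    u = np.zeros_like(I1)
    v = np.zeros_like(I1)
    # 4-neighbor average implements the smoothness (Laplacian) term
    avg = lambda f: (np.roll(f, 1, 0) + np.roll(f, -1, 0)
                     + np.roll(f, 1, 1) + np.roll(f, -1, 1)) / 4.0
    for _ in range(n_iter):
        u_bar, v_bar = avg(u), avg(v)
        num = Ix * u_bar + Iy * v_bar + It
        den = alpha ** 2 + Ix ** 2 + Iy ** 2
        u = u_bar - Ix * num / den
        v = v_bar - Iy * num / den
    return u, v
```

Larger `alpha` gives smoother (more regularized) velocity fields, which is the same trade-off the spatial-temporal smoothness constraints control in OFOCT.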
The third signal processing technique proposed is phase-based. A wrapped Gaussian mixture model (WGMM) is proposed to stabilize the phase of swept-source OCT (SSOCT) systems. The OCT signal phase is divided into several components, and each component is fully analyzed. The WGMM is developed based on this analysis. A closed-form iterative solution of the WGMM is derived using the expectation-maximization (EM) algorithm. The performance of the proposed method is demonstrated through OCT imaging of ex-vivo mouse cornea and anterior chamber.
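For intuition about the EM iteration underlying the WGMM, here is a minimal sketch of plain (unwrapped) 1-D Gaussian-mixture EM with closed-form M-step updates. This is only an illustration of the EM structure; the actual WGMM additionally wraps each component onto the phase interval, which changes the update equations and is not reproduced here.

```python
import numpy as np

def gmm_em_1d(x, mu, sigma, pi, n_iter=50):
    """Plain 1-D Gaussian-mixture EM. x: data vector; mu, sigma, pi:
    initial component means, std devs, and mixing weights."""
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point
        dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) \
               / (sigma * np.sqrt(2 * np.pi))
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: closed-form updates of means, variances, weights
        n = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / n
        sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n)
        pi = n / len(x)
    return mu, sigma, pi
```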
For all three proposed methods above, progress has been made in theoretical modeling, numerical implementation, and experimental verification. All the algorithms have been implemented on the graphics processing unit (GPU) in the OCT system for real-time data processing. Preliminary results demonstrate the good performance of these proposed methods. The final thesis work will include optimizing the proposed methods and applying the implemented algorithms to both ex-vivo and in-vivo biomedical research for overall system testing and analysis.
Title: Single Image Based Crowd Counting Using Deep Learning
Abstract: Estimating count and density maps from crowd images has a wide range of applications, such as video surveillance, traffic monitoring, public safety, and urban planning. In addition, techniques developed for crowd counting can be applied to related tasks in other fields of study, such as cell microscopy, vehicle counting, and environmental surveys. The task of crowd counting and density map estimation from a single image is a difficult problem, since it suffers from multiple issues such as occlusions, perspective changes, background clutter, non-uniform density, and intra-scene and inter-scene variations in scale and perspective. These issues are further exacerbated in highly congested scenes. In order to overcome these challenges, we propose a variety of deep learning architectures that specifically incorporate various aspects such as global/local context information, attention mechanisms, and specialized iterative and multi-level multi-pathway fusion schemes for combining information from multiple layers in a deep network. Through extensive experiments and evaluations on several crowd counting datasets, we demonstrate that the proposed networks achieve significant improvements over existing approaches.
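A common preliminary step in density-map-based counting is generating the ground-truth density map: place a normalized Gaussian at each dot (head) annotation so that the map integrates to the crowd count. The sketch below uses an assumed fixed `sigma`; many works instead use geometry-adaptive kernels, and nothing here is specific to the proposed networks.

```python
import numpy as np

def density_map(points, shape, sigma=4.0):
    """Sum one unit-mass Gaussian per annotated point (x, y);
    the resulting map's integral equals the number of points."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    dmap = np.zeros(shape)
    for (px, py) in points:
        g = np.exp(-((xs - px) ** 2 + (ys - py) ** 2) / (2 * sigma ** 2))
        dmap += g / g.sum()   # normalize so each person contributes mass 1
    return dmap
```

A counting network is then trained to regress this map, and the predicted count is simply the sum over the predicted map.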
We also recognize the need for large amounts of data for training the deep networks and their inability to generalize to new scenes and distributions. To overcome this challenge, we propose novel semi-supervised and weakly-supervised crowd counting techniques that effectively leverage large amounts of unlabeled/weakly-labeled data. In addition to developing techniques with the ability to learn from limited labeled data, we also introduce a new large-scale crowd counting dataset which can be used to train considerably larger networks. The proposed dataset consists of 4,372 high-resolution images with 1.51 million annotations. We made explicit efforts to ensure that the images are collected under a variety of diverse scenarios and environmental conditions. The dataset provides a richer set of annotations, such as dots, approximate bounding boxes, and blur levels.
Title: Towards End-to-end Non-autoregressive speech applications
Abstract: Sequence labeling is a fascinating and challenging topic in the speech research community. The sequence-to-sequence model, a particularly popular end-to-end model, has been proposed for various sequence labeling tasks. Autoregressive models are the dominant approach: they predict labels one by one, conditioning on previous results. This makes training easier and more stable. However, this simplicity also results in inefficient inference, particularly for lengthy output sequences. To speed up the inference procedure, researchers have become interested in another type of sequence-to-sequence model, known as non-autoregressive models. In contrast to autoregressive models, non-autoregressive models predict the whole sequence within a constant number of iterations.
In this proposal, two different types of non-autoregressive models for speech applications are proposed: a mask-based approach and a noise-based approach. To demonstrate the effectiveness of the two proposed methods, we explore their usage for two important topics: speech recognition and speech synthesis. Experiments reveal that the proposed methods can match the performance of state-of-the-art autoregressive models with a much shorter inference time.
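As one concrete example of a mask-based non-autoregressive scheme, the sketch below follows the generic mask-predict idea: start from a fully masked sequence, fill every position in parallel, then iteratively re-mask and re-predict the least confident positions for a fixed number of iterations. This is an illustrative sketch, not the proposal's model; the `model` callable and all names are assumptions.

```python
import numpy as np

MASK = -1  # sentinel id for a masked position

def mask_predict(model, length, n_iters=3):
    """Generic mask-predict decoding: model(tokens) returns a
    (length, vocab) array of per-position posteriors; all masked
    positions are filled in parallel each iteration."""
    tokens = np.full(length, MASK)
    confidences = np.zeros(length)
    for t in range(n_iters):
        probs = model(tokens)
        preds = probs.argmax(axis=-1)
        masked = tokens == MASK
        tokens[masked] = preds[masked]
        confidences[masked] = probs.max(axis=-1)[masked]
        # linearly shrinking number of positions to re-mask
        n_mask = int(length * (1 - (t + 1) / n_iters))
        if n_mask > 0:
            lowest = np.argsort(confidences)[:n_mask]
            tokens[lowest] = MASK
    return tokens
```

Because every iteration fills all positions at once, the number of model calls is a constant (`n_iters`) rather than growing with the output length, which is the source of the inference speedup over autoregressive decoding.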
Hosted by the
Department of Electrical and Computer Engineering, Johns Hopkins University
and Technical Co-sponsorship by the IEEE Information Theory Society
CISS 2021 is a forum for scientists, engineers, and academics to present their latest research results and developments in multiple areas of Information Sciences and Systems. Authors will present unpublished papers describing theoretical advances, applications, and ideas in the fields of:
Title: Deep Learning Based Methods for Ultrasound Image Segmentation and Magnetic Resonance Image Reconstruction
Abstract: In recent years, deep learning (DL) algorithms, in particular convolutional networks, have rapidly become a methodology of choice for analyzing medical images. They have shown promising performance in many medical image analysis (MIA) problems, including classification, segmentation, and reconstruction. However, the inherent differences between natural images and medical images (ultrasound, MRI, etc.) have hindered the performance of DL-based methods originally designed for natural images. Another obstacle for DL-based MIA comes from the availability of large-scale training datasets, as it has been shown that large and diverse datasets can effectively improve the robustness and generalization ability of DL networks.
In this thesis, we develop various deep learning-based approaches to address two medical image analysis problems. In the first problem, we focus on computer-assisted orthopedic surgery (CAOS) applications that use ultrasound as an intra-operative imaging modality. This problem requires an automatic and real-time algorithm to detect and segment bone surfaces and shadows in order to guide the orthopedic surgeon to a standardized diagnostic viewing plane with minimal artifacts. Due to the limitations of relatively small datasets and image differences across multiple ultrasound machines, we develop DL-based frameworks that leverage a local phase filtering technique and integrate it into the DL framework, thus improving robustness.
Finally, we propose a fast and accurate Magnetic Resonance Imaging (MRI) reconstruction framework using a novel Convolutional Recurrent Neural Network (CRNN). Extensive experiments and evaluations on knee and brain datasets have shown its outstanding results compared to traditional compressed sensing and other DL-based methods. Furthermore, we extend this method to enable multi-sequence reconstruction, where a T2-weighted MRI image can provide guidance and improvement to the reconstruction of amide proton transfer-weighted images.
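A building block common to DL-based MRI reconstruction networks, including cascaded and recurrent designs, is a data-consistency step that re-imposes the acquired k-space samples on the network's current image estimate. The sketch below is a minimal, generic version with assumed names and a Cartesian sampling mask, not the thesis's CRNN implementation.

```python
import numpy as np

def data_consistency(x, k0, mask):
    """Enforce fidelity to the acquired data: transform the current
    image estimate x to k-space, overwrite the sampled locations
    (mask == True) with the measured data k0, and transform back."""
    k = np.fft.fft2(x)
    k = np.where(mask, k0, k)
    return np.fft.ifft2(k)
```

In a cascaded or recurrent network, this step is interleaved with learned denoising blocks, so the final reconstruction both agrees with the measurements and benefits from the learned prior.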
Carlos Castillo, Department of Electrical and Computer Engineering
Shanshan Jiang, Department of Radiology and Radiological Science
Ilker Hacihaliloglu, Department of Biomedical Engineering (Rutgers University)
Title: Harmonization of Structural MRI for Consistent Image Analysis
Abstract: Magnetic resonance imaging (MRI) is a flexible, non-invasive medical imaging modality that uses strong magnetic fields and radio-frequency pulses to produce images with excellent contrast in the soft tissues of the body. MRI is commonly used in diagnosis and monitoring of many conditions, but is especially useful in disorders of the central nervous system, such as multiple sclerosis (MS), where the brain and spinal cord are heavily involved. An MRI scan normally contains a number of imaging volumes, where different pulse sequence parameters are selected to highlight different tissue properties. These volumes can then be used together to provide complementary information about the imaged area. Flexible design of the imaging system allows for a variety of questions to be answered during a single scanning session, but also comes with a cost. As there are many parameters to define when designing an imaging sequence, there is no common standard that is widely used. These differences lead to variability in image appearance between manufacturers, imaging centers, and even individual scanners. As an example, a commonly acquired MR volume is a T1-weighted image, where differences in a specific magnetic property (longitudinal relaxation time, or T1) are highlighted. However, this general effect can be achieved with a myriad of different pulse sequences even before the individual parameters are considered. This is perhaps most apparent in the difference between T1-weighted images with and without a preparatory inversion pulse, where images with an inversion pulse tend to have a much clearer contrast between grey and white matter in the brain. With the advent of advanced machine learning methods, variations such as the example above create a large problem, as accurate methods become closely tied to the data used to train them and any variation in inputs can have unknown effects on output quality.
This problem sets the stage for image harmonization, where synthetic “harmonized” images are produced after acquisition to provide consistent inputs to image analysis routines.
This thesis aims to develop harmonization strategies for structural brain MR images that will allow for the synthesis of harmonized images from differing inputs. These images can then be used downstream in automated analysis pipelines, most commonly whole-brain segmentation for volumetric analysis. Recently, deep learning-based techniques have been shown to be excellent candidates in the realm of image synthesis and can be readily incorporated into harmonization tasks. However, this is complicated, as training data (especially in multi-site settings) is rarely available. This work will approach these problems by covering three main topics:
Title: Intraoperative Optical Coherence Tomography Guided Deep Anterior Lamellar Keratoplasty
Abstract: Deep anterior lamellar keratoplasty (DALK) is a highly challenging procedure requiring micron accuracy to guide a “big bubble” needle into the stroma of the cornea down to Descemet’s membrane (DM). It has important advantages over penetrating keratoplasty (PK), including a lower rejection rate, less endothelial cell loss, and increased graft survival. Currently, this procedure relies heavily on visualization through a surgical microscope, the surgeon’s own surgical experience, and tactile feel to determine the relative position of the needle and DM. Optical coherence tomography (OCT) is a well-established, non-invasive optical imaging technology that can provide high-speed, high-resolution, three-dimensional images of biological samples. Since it was first demonstrated in 1991, OCT has emerged as a leading technology for ophthalmic visualization, especially for retinal structures, and has been widely applied in ophthalmic surgery and research. Common-path (CP) OCT systems use a single A-scan to deduce tissue layer information and can be operated at a much higher speed. This synergizes well with handheld tools and automated surgical systems, which require fast response times. CP-OCT has been integrated into a wide range of microsurgical tools for procedures such as epiretinal membrane peeling and subretinal injection.
In this proposal, a common-path swept-source OCT system (CP-SSOCT) is proposed to guide DALK procedures. An OCT distal-sensor-integrated needle and an OCT-guided micro-control ocular surgical system (AUTO-DALK) will be designed and evaluated. This device will allow for the autonomous insertion of a needle for pneumo-dissection based on the depth-sensing results from the OCT system. An earlier prototype of AUTO-DALK was tested on ex-vivo porcine cornea, including a comparison with expert manual needle insertion. The results showed that the precision and consistency of needle placement were increased, which could lead to better visual outcomes and fewer complications. Future work will include improving the overall design for in-vivo testing and clinical use, advanced convolutional neural network-based tracking, and system validation on a larger sample size.
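The depth-sensing step that such a CP-OCT-guided tool relies on can be illustrated very simply: locate the dominant reflector in a single A-scan and convert its sample index to physical depth. The snippet below is a hypothetical sketch; the thresholding rule, names, and parameters are assumptions for illustration, not the AUTO-DALK implementation.

```python
import numpy as np

def layer_depth(ascan, pixel_size_um, threshold=0.5):
    """Estimate the depth of the strongest reflector in one A-scan:
    the first sample whose normalized intensity exceeds the threshold,
    scaled by the axial pixel size in micrometers."""
    a = ascan / ascan.max()          # normalize to the brightest sample
    idx = np.argmax(a >= threshold)  # first index crossing the threshold
    return idx * pixel_size_um
```

Running this on each incoming A-scan gives the needle-to-layer distance signal that a micro-control loop could servo on.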
Jin U. Kang (adviser), Department of Electrical and Computer Engineering
Israel Gannot, Department of Electrical and Computer Engineering
Xingde Li, Department of Biomedical Engineering