Note: This is a virtual presentation. Here is the link for where the presentation will be taking place.
Title: Detecting Unknown Instances Using CNNs
Abstract: Deep convolutional neural networks (DCNNs) have shown impressive performance improvements for object detection and recognition problems. However, a vast majority of DCNN-based recognition methods are designed for a closed world, where the primary assumption is that all categories are known a priori. In many real-world applications, this assumption does not necessarily hold. Generally, incomplete knowledge of the world is present at training time, and unknown classes can be submitted to an algorithm during testing. The goal of a visual recognition system is then to reject samples from unknown classes and classify samples from known classes.
In the first part of my talk, I will present new DCNNs for anomaly detection based on one-class classification. The main idea is to use zero-centered Gaussian noise in the feature space as the pseudo-negative class and train the network using the cross-entropy loss. A method in which both the classifier and the feature representations are learned together in an end-to-end fashion will also be presented. In the second part of the talk, I will present a multi-class category detection method that utilizes both global and local information to predict whether a test image belongs to one of the known classes or to an unknown category. Specifically, the model is trained using one network to perform image-level category prediction and another network to perform patch-level category prediction. We evaluate the effectiveness of all these methods on multiple publicly available datasets and show that they achieve better performance than previous state-of-the-art methods.
Title: Student-Teacher Learning Techniques for Bilingual and Low Resource OCR
Abstract: Optical Character Recognition (OCR) is the automatic generation of a transcription given a line image of text. Current methods have been very successful on printed English text, with character error rates of less than 1%. However, clean datasets are not commonly seen in real-life applications. There is a move in OCR towards 'text in the wild': conditions with lower-resolution images such as storefronts, street signs, and billboards. Oftentimes these texts contain multiple scripts, especially in countries where multiple languages are spoken. In addition, Latin characters appear widely regardless of the local language. The presence of multilingual text poses a unique challenge.
Traditional OCR methods involve text localization, script identification, and then text recognition. A separate system is used for each task, and the results from one system are passed to the next. However, the downside of this pipeline approach is that errors propagate downstream and there is no way to provide feedback upstream. These downsides can be mitigated with fully integrated approaches, where one large system performs text localization, script identification, and text recognition jointly. These approaches are also sometimes known as end-to-end approaches in the literature.
With larger and larger networks, there is also a need for a greater amount of training data. However, this data may be difficult to obtain if the target language is low resource. There are also problems if the data that is obtained is in a slightly different domain, for example, printed versus handwritten text. This is where synthetic data generation techniques and domain adaptation techniques can be helpful.
Given these current challenges in OCR, this thesis proposal focuses on training integrated (i.e., end-to-end) bilingual systems and on domain adaptation techniques. Both objectives can be achieved using student-teacher learning methods. The basic idea of this approach is to have a trained teacher model contribute an additional loss function while training a student model: the outputs of the teacher are used as soft targets for the student to learn. The following experiments will be performed:
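The soft-target loss at the heart of student-teacher learning can be sketched as follows. This is a generic knowledge-distillation sketch, not the thesis system: the logits and the temperature value are illustrative placeholders for one character position of an OCR output.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax; T > 1 softens the distribution."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Hypothetical logits over three characters at one output position.
teacher_logits = np.array([4.0, 1.0, 0.5])
student_logits = np.array([2.0, 2.0, 0.1])

T = 2.0
soft_targets = softmax(teacher_logits, T)   # teacher's soft targets
student_probs = softmax(student_logits, T)

# Soft-target cross-entropy, added to the student's usual training loss.
distill_loss = -(soft_targets * np.log(student_probs)).sum()
```

The loss is minimized when the student's tempered distribution matches the teacher's, so the teacher's relative confidences (not just its top choice) guide the student.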
Title: Optical coherence tomography signal processing in the complex domain
Abstract: Optical coherence tomography (OCT) plays an indispensable role in clinical fields such as ophthalmology and dermatology. Over the past 30 years, OCT has gone through tremendous developments, including both hardware improvements and novel signal processing techniques. Hardware improvements such as the use of adaptive optics (AO) and of vertical-cavity surface-emitting lasers (VCSELs) help push the fundamental limits of OCT imaging capability. Novel signal processing techniques aim to push the imaging capability beyond current hardware architecture limitations. Often, novel signal processing techniques achieve better performance than hardware modifications while keeping costs low. The purpose of this dissertation proposal is to develop novel OCT signal processing techniques that provide new imaging capabilities and overcome current imaging limitations.
The OCT signal, being the result of interference between light back-scattered from the sample and the reference light, is complex-valued and contains both amplitude and phase information. The amplitude information is mostly used for OCT structural imaging, while the phase information is mostly used for OCT functional imaging. Usually, amplitude-based methods are more robust since they are less prone to noise, while phase-based methods are better suited to precise quantitative measurements since they are more sensitive to micro-displacements. This dissertation proposal focuses on three advanced OCT signal processing techniques spanning the amplitude and phase domains.
The first signal processing technique proposed is the amplitude-based BC-mode OCT image visualization for microsurgery guidance, where multiple sparsely sampled B-scans are combined to generate a single cross-section image with enhanced instrument and tissue layer visibility and reduced shadowing artifacts. The performance of the proposed method is demonstrated by guiding a 30-gauge needle into an ex-vivo human cornea.
The second signal processing technique proposed is the amplitude-based optical flow OCT (OFOCT) for determining accurate velocity fields. A modified continuity constraint is used to compensate for the Fourier-domain OCT (FDOCT) sensitivity fall-off. Spatial-temporal smoothness constraints are used to make the optical flow problem well-posed and to reduce noise in the velocity fields. The accuracy of the proposed method is verified through phantom flow experiments using a diluted milk powder solution as the scattering medium, in both advective and turbulent flow.
The third signal processing technique proposed is phase-based. A wrapped Gaussian mixture model (WGMM) is proposed to stabilize the phase of swept-source OCT (SSOCT) systems. The OCT signal phase is divided into several components, each of which is fully analyzed, and the WGMM is developed based on this analysis. A closed-form iterative solution of the WGMM is derived using the expectation-maximization (EM) algorithm. The performance of the proposed method is demonstrated through OCT imaging of ex-vivo mouse cornea and anterior chamber.
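The reason a *wrapped* model is needed can be seen in a short sketch. Measured phase lives on (-pi, pi], so a Gaussian phase disturbance whose spread is comparable to pi wraps around the circle; a wrapped Gaussian density accounts for this by summing the Gaussian over all 2*pi*k shifts. The sketch below shows only this single wrapped-Gaussian component with illustrative parameters; the proposal's full mixture model and EM solution are not reproduced here.

```python
import numpy as np

def wrap(phi):
    """Wrap phase values into (-pi, pi], as a detector would report them."""
    return np.angle(np.exp(1j * np.asarray(phi)))

def wrapped_gaussian_pdf(phi, mu, sigma, kmax=10):
    """Wrapped Gaussian density: a Gaussian summed over 2*pi*k shifts."""
    k = np.arange(-kmax, kmax + 1)
    shifts = np.asarray(phi)[..., None] - mu + 2.0 * np.pi * k
    comps = np.exp(-shifts**2 / (2.0 * sigma**2)) / (sigma * np.sqrt(2.0 * np.pi))
    return comps.sum(axis=-1)

# Phase noise with a spread comparable to pi wraps around many times,
# which is why a plain (unwrapped) Gaussian no longer fits the data.
rng = np.random.default_rng(1)
raw_phase = rng.normal(0.5, 2.5, size=10_000)
measured = wrap(raw_phase)
```

With a large sigma the wrapped density approaches uniform on the circle, while a small sigma recovers the ordinary Gaussian; the mixture in the WGMM combines several such components.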
For all three proposed methods above, progress has been made in theoretical modeling, numerical implementation, and experimental verification. All the algorithms have been implemented on the graphics processing unit (GPU) of the OCT system for real-time data processing. Preliminary results demonstrate good performance for these proposed methods. The final thesis work will include optimizing the proposed methods and applying the implemented algorithms to both ex-vivo and in-vivo biomedical research for overall system testing and analysis.
Title: Towards End-to-end Non-autoregressive speech applications
Abstract: Sequence labeling is a fascinating and challenging topic in the speech research community. Sequence-to-sequence models are a particularly popular class of end-to-end models proposed for various sequence labeling tasks. Autoregressive models are the dominant approach: they predict labels one by one, conditioning on previous results, which makes training easier and more stable. However, this simplicity also results in inefficient inference, particularly for lengthy output sequences. To speed up inference, researchers have become interested in another type of sequence-to-sequence model, known as non-autoregressive models. In contrast to autoregressive models, non-autoregressive models predict the whole sequence within a constant number of iterations.
In this proposal, two different types of non-autoregressive models for speech applications are proposed: a mask-based approach and a noise-based approach. To demonstrate the effectiveness of the two proposed methods, we explore their use in two important applications: speech recognition and speech synthesis. Experiments reveal that the proposed methods can match the performance of state-of-the-art autoregressive models with much shorter inference time.
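The mask-based decoding idea can be sketched with a toy iteration. This is a generic mask-predict-style sketch, not the proposal's models: the "network" is a random stand-in, and the vocabulary, length, and iteration count are illustrative. It shows the key property that the whole sequence is filled in within a fixed number of parallel iterations, committing the most confident positions first.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, LENGTH, ITERS = 5, 6, 3
MASK = -1  # placeholder id for a masked position

def toy_model(tokens):
    """Hypothetical stand-in for the network: returns a distribution over
    the vocabulary at every output position (random here for illustration)."""
    logits = rng.normal(size=(LENGTH, VOCAB))
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

tokens = np.full(LENGTH, MASK)  # start with every position masked
for it in range(ITERS):
    probs = toy_model(tokens)
    pred = probs.argmax(axis=-1)  # predict all positions in parallel
    conf = probs.max(axis=-1)
    # Commit the most confident positions; commit more each iteration.
    n_keep = (it + 1) * LENGTH // ITERS
    keep = np.argsort(-conf)[:n_keep]
    new = np.full(LENGTH, MASK)
    new[keep] = pred[keep]
    new[tokens != MASK] = tokens[tokens != MASK]  # keep earlier commitments
    tokens = new
```

In this simplified variant, committed positions stay fixed; published mask-predict schemes also allow re-masking low-confidence positions on later iterations.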
Title: Harmonization of Structural MRI for Consistent Image Analysis
Abstract: Magnetic resonance imaging (MRI) is a flexible, non-invasive medical imaging modality that uses strong magnetic fields and radio-frequency pulses to produce images with excellent contrast in the soft tissues of the body. MRI is commonly used in the diagnosis and monitoring of many conditions, but is especially useful in disorders of the central nervous system, such as multiple sclerosis (MS), where the brain and spinal cord are heavily involved. An MRI scan normally contains a number of imaging volumes, where different pulse sequence parameters are selected to highlight different tissue properties. These volumes can then be used together to provide complementary information about the imaged area. Flexible design of the imaging system allows a variety of questions to be answered during a single scanning session, but also comes with a cost. As there are many parameters to define when designing an imaging sequence, there is no common standard that is widely used. These differences lead to variability in image appearance between manufacturers, imaging centers, and even individual scanners. As an example, a commonly acquired MR volume is the T1-weighted image, where differences in a specific magnetic property (longitudinal relaxation time, or T1) are highlighted. However, this general effect can be achieved with a myriad of different pulse sequences even before the individual parameters are considered. This is perhaps most apparent in the difference between T1-weighted images with and without a preparatory inversion pulse, where images with an inversion pulse tend to have much clearer contrast between grey and white matter in the brain. With the advent of advanced machine learning methods, variations such as the example above create a large problem, as accurate methods become closely tied to the data used to train them, and any variation in inputs can have unknown effects on output quality.
This problem sets the stage for image harmonization, where synthetic “harmonized” images are produced after acquisition to provide consistent inputs to image analysis routines.
This thesis aims to develop harmonization strategies for structural brain MR images that will allow for the synthesis of harmonized images from differing inputs. These images can then be used downstream in automated analysis pipelines, most commonly whole-brain segmentation for volumetric analysis. Recently, deep learning-based techniques have been shown to be excellent candidates in the realm of image synthesis and can be readily incorporated into harmonization tasks. However, this is complicated, as suitable training data (especially in multi-site settings) is rarely available. This work will approach these problems by covering three main topics:
Title: Intraoperative Optical Coherence Tomography Guided Deep Anterior Lamellar Keratoplasty
Abstract: Deep anterior lamellar keratoplasty (DALK) is a highly challenging procedure requiring micron accuracy to guide a "big bubble" needle into the stroma of the cornea down to Descemet's Membrane (DM). It has important advantages over penetrating keratoplasty (PK), including a lower rejection rate, less endothelial cell loss, and increased graft survival. Currently, this procedure relies heavily on visualization through a surgical microscope, the surgeon's own surgical experience, and tactile feel to determine the relative position of the needle and DM. Optical coherence tomography (OCT) is a well-established, non-invasive optical imaging technology that can provide high-speed, high-resolution, three-dimensional images of biological samples. Since it was first demonstrated in 1991, OCT has emerged as a leading technology for ophthalmic visualization, especially of retinal structures, and has been widely applied in ophthalmic surgery and research. Common-path (CP) OCT systems use a single A-scan to deduce tissue layer information and can be operated at a much higher speed. This synergizes well with handheld tools and automated surgical systems, which require fast response times. CP-OCT has been integrated into a wide range of microsurgical tools for procedures such as epiretinal membrane peeling and subretinal injection.
In this proposal, a common-path swept-source OCT (CP-SSOCT) system is proposed to guide DALK procedures. An OCT distal-sensor-integrated needle and an OCT-guided micro-control ocular surgical system (AUTO-DALK) will be designed and evaluated. This device will allow for the autonomous insertion of a needle for pneumo-dissection based on the depth-sensing results from the OCT system. An earlier prototype of AUTO-DALK was tested on ex-vivo porcine cornea, including a comparison with expert manual needle insertion. The results showed increased precision and consistency of needle placement, which could lead to better visual outcomes and fewer complications. Future work will include improving the overall design for in-vivo testing and clinical use, advanced convolutional neural network-based tracking, and system validation on a larger sample size.
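The depth-sensing step that such guidance relies on can be sketched in simplified form: locate the two strongest reflections in a single A-scan and convert their separation into a physical distance. Everything below is simulated and illustrative (the peak positions, amplitudes, and pixel spacing are invented for the sketch), not the AUTO-DALK processing chain.

```python
import numpy as np

# Simulated A-scan: a noise floor plus two reflector echoes, standing in
# for the needle tip and Descemet's Membrane (DM). Assumed parameters.
n = 1024
pixel_size_um = 3.0  # assumed axial pixel spacing
z = np.arange(n)
rng = np.random.default_rng(2)
ascan = 0.02 * np.abs(rng.normal(size=n))                   # noise floor
ascan += 1.0 * np.exp(-((z - 300) ** 2) / (2 * 4.0 ** 2))   # needle-tip echo
ascan += 0.6 * np.exp(-((z - 520) ** 2) / (2 * 4.0 ** 2))   # DM echo

tip = int(np.argmax(ascan))                  # strongest peak = needle tip
masked = ascan.copy()
masked[max(0, tip - 20):tip + 20] = 0.0      # suppress the first peak
dm = int(np.argmax(masked))                  # next strongest peak = DM
distance_um = abs(dm - tip) * pixel_size_um  # tip-to-DM separation
```

A real system would track this separation A-scan by A-scan and feed it to the micro-control loop that advances the needle.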
Jin U. Kang (adviser), Department of Electrical and Computer Engineering
Israel Gannot, Department of Electrical and Computer Engineering
Xingde Li, Department of Biomedical Engineering
Title: Coherence-based learning from raw ultrasound data for breast mass diagnosis
Abstract: Breast cancer is the most prevalent cancer among women in the United States, with approximately one in eight women being diagnosed in their lifetimes. Imaging modalities such as mammography, MRI, and ultrasound are employed to non-invasively visualize breast masses in order to determine the need for a biopsy. However, each of these methods results in a significant number of patients requiring biopsies of benign masses. Ultrasound in particular is praised for its low cost, painlessness, and portability, yet the false positive rate of breast ultrasound can be as high as 93% depending on the type of mass in question. Most commonly, diagnosis is performed using the brightness-mode (B-mode) image present on most clinical ultrasound scanners, which transitions naturally to the use of B-mode images for segmentation and classification of breast masses. Ultimately, segmentation and classification of breast masses can be summarized as analysis of a grayscale image. While this approach has been successful, information is lost during the B-mode image formation process.
An alternative approach to the lossy process of information extraction from B-mode images is to leverage features (e.g., spatial coherence) of backscattered ultrasound waves to determine the content of a breast mass. I will first describe my contributions to improve the diagnostic quality of breast ultrasound images by leveraging spatial coherence information. Next, I will present my deep learning approach to overcome limitations with real-time implementation of coherence-based imaging techniques. Finally, I will present a new method to learn the high-dimensional features encoded within backscattered ultrasound waves in order to differentiate benign from malignant breast masses.
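The spatial-coherence feature mentioned above can be sketched with a simplified short-lag spatial coherence (SLSC) computation. This sketch correlates element pairs over the whole sample window rather than a sliding depth kernel as in the published SLSC beamformer, and the data are synthetic, purely to show why coherent echoes and incoherent clutter separate.

```python
import numpy as np

def slsc(channel_data, max_lag):
    """Simplified short-lag spatial coherence: average normalized
    correlation between aperture elements separated by lags 1..max_lag.

    channel_data: (n_elements, n_samples) array of delayed channel signals.
    """
    total = 0.0
    for m in range(1, max_lag + 1):
        a, b = channel_data[:-m], channel_data[m:]
        num = (a * b).sum(axis=1)
        den = np.sqrt((a * a).sum(axis=1) * (b * b).sum(axis=1))
        total += (num / den).mean()
    return total / max_lag

# A coherent wavefront (same waveform on all elements) gives coherence ~1;
# incoherent noise across the aperture gives coherence near 0.
rng = np.random.default_rng(0)
wave = rng.normal(size=256)
coherent = np.tile(wave, (32, 1))
incoherent = rng.normal(size=(32, 256))
```

Conventional B-mode formation sums the channels and discards this per-element structure, which is exactly the information the coherence-based approach retains.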
Title: Engineering Colloidal Quantum-Confined Nanomaterials for Multi-junction Solar Cell Applications
Abstract: Current single junction solar cell technologies are rapidly approaching their theoretical limits of approximately 33% power conversion efficiency. Semiconductor nanoparticles such as colloidal quantum dots (CQDs) are of interest for photovoltaic applications due to their infrared absorption, size-tunable optical properties and low-cost solution processability. Lead sulfide (PbS) CQDs offer the potential to increase solar cell efficiencies via multi-junction architectures due to these properties. This project aims to develop new strategies for implementing PbS CQDs as a material for multi-junction architectures to improve solar cell efficiencies and expand potential applications.
The first phase of the proposed research begins with developing a better-performing single-junction PbS CQD solar cell by improving the performance-limiting hole transport layer (HTL) in these devices. We will employ two methods to improve and replace this layer. First, we will use sulfur infusion via electron beam evaporation to alter the stoichiometry of the standard HTL. We also plan to completely replace the standard HTL with 2D nanoflakes of tungsten diselenide, an atomically thin semiconducting transition metal dichalcogenide. The second phase of the research involves developing a PbS CQD multi-junction solar cell, including a novel recombination layer. The third phase of the research involves developing a hybrid multi-junction strategy in which PbS CQD films employing photonic band engineering for spectral selectivity serve as the infrared cell and other materials serve as the visible cell. The ultimate goal of these three research phases is to use photonic and materials engineering to improve efficiency and flexibility in CQD-based multi-junction solar cells to meet the demand for affordable, sustainable solar energy.
Title: Early prediction of adverse clinical events and optimal intervention in ICUs
Abstract: Personalized healthcare is a rapidly evolving research area with tremendous potential for optimizing patient care strategies and improving patient outcomes. Traditionally, clinical decision making relies on assessment and intervention based on the collective experience of physicians. Using big-data analytics techniques, we can now harness data-driven models to enable early prediction of patients at risk of adverse clinical events. These predictive models can provide timely analytical information to physicians, facilitating early therapeutic intervention and efficient management of patients in intensive care units (ICUs).
In addition to early prediction, it is equally important to optimize intervention strategies for critically ill patients. One such urgent need is to optimally oxygenate COVID-19 patients diagnosed with acute respiratory distress syndrome (ARDS). Moderate to severe ARDS patients generally require mechanical ventilation to improve oxygen saturation and to reduce the risk of organ failure and death. The most common ventilator settings across all modes of mechanical ventilation are positive end-expiratory pressure (PEEP) and fraction of inspired oxygen (FiO2). Increasing either of these settings is expected to increase oxygen saturation. However, prolonged ventilation of patients with high PEEP and FiO2 significantly increases the risk of ventilator associated lung injury. Therefore, an optimal strategy is required to improve patient outcomes.
This thesis presents two overarching aims: (1) early prediction of adverse events and (2) optimal intervention for mechanically ventilated patients. In contrast to the fixed lead-time prediction models of prior work, our methodology proposes a new framework which hypothesizes the presence of a time-varying pre-event physiologic state that differentiates target patients from the control group. We also present a unique approach to patient risk stratification using an unsupervised clustering technique that can identify a high-risk group among all predicted positive cases, with a positive predictive value of more than 93% when applied to multiple organ dysfunction prediction.
In the second aim, we propose a novel application of data-driven linear parameter-varying (LPV) systems to capture the time-varying dynamics of oxygen saturation in response to ventilator settings under a changing physiological state, and compare them with linear time-invariant models. Most prior studies on closed-loop ventilator control have used stepwise rule-based procedures, fuzzy logic, or a combination of rule-based methods and proportional-integral-derivative (PID) controllers for closed-loop control of FiO2. Other studies have worked on control strategies based on ventilator-measured variables and on various mathematical lung models. In contrast, we design optimal closed-loop ventilator strategies that are model-based. A simulation of optimal ventilation settings for maintaining desired oxygen saturation using feedback control of LPV systems is presented.
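The LPV idea can be illustrated with a toy open-loop simulation. The coefficients, scheduling signal, and FiO2 level below are invented placeholders (not fitted to patient data, and without the thesis's optimal feedback controller); the sketch only shows the defining LPV property that the system matrices vary with a scheduling parameter.

```python
import numpy as np

# Toy discrete-time LPV model of oxygen-saturation dynamics:
#     x[k+1] = a(p[k]) * x[k] + b(p[k]) * u[k]
# where u[k] is the FiO2 setting and the scheduling parameter p[k]
# stands in for the changing physiological state. All coefficients
# are illustrative placeholders, not values fitted to patient data.

def a(p):
    return 0.90 + 0.05 * p   # state decay varies with physiological state

def b(p):
    return 0.10 + 0.02 * p   # FiO2-to-saturation gain varies with state

def simulate(u, p, x0=90.0):
    x, traj = x0, [x0]
    for uk, pk in zip(u, p):
        x = a(pk) * x + b(pk) * uk
        traj.append(x)
    return np.array(traj)

steps = 50
u = np.full(steps, 40.0)           # hold FiO2 at 40%
p = np.linspace(0.0, 1.0, steps)   # slowly drifting physiological state
traj = simulate(u, p)
```

A linear time-invariant model would freeze a and b at single values; the LPV formulation lets the same structure track a patient whose response to the ventilator changes over time, which is what the model-based controller then exploits.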
Title: Photoacoustic imaging to detect major blood vessels and nerves during neurosurgery and head and neck surgery
Abstract: Real-time intraoperative guidance during minimally invasive neurosurgical and head and neck procedures is often limited to endoscopy, CT-guided image navigation, and electromyography, which are generally insufficient to locate major blood vessels and nerves hidden by tissue. Accidental damage to these hidden structures has incidence rates of 6.8% in surgeries to remove pituitary tumors (i.e., endonasal transsphenoidal surgery) and 3-4% in surgeries to remove parotid tumors (i.e., parotidectomy), often resulting in severe consequences, such as patient blindness, paralysis, and death. Photoacoustic imaging is a promising emerging imaging technique to provide real-time guidance of subsurface blood vessels and nerves during these surgeries.
Limited optical penetration through bone and the presence of acoustic clutter, reverberations, aberration, and attenuation can degrade photoacoustic image quality and undermine the usefulness of this promising intraoperative guidance technique. To mitigate image degradation, photoacoustic imaging system parameters may be adjusted and optimized to suit the specific imaging environment. In particular, parameter adjustment can be categorized into the optimization of photoacoustic signal generation and the optimization of photoacoustic image formation (i.e., beamforming) and image display methods.
In this talk, I will describe my contributions to leverage amplitude- and coherence-based beamforming techniques to improve photoacoustic image display for the detection of blood vessels during endonasal transsphenoidal surgery. I will then present my contributions to the derivation of a novel photoacoustic spatial coherence theory, which provides a fundamental understanding critical to the optimization of coherence-based photoacoustic images. Finally, I will present a plan to translate this work from the visualization of blood vessels during neurosurgery to the visualization of nerves during head and neck surgery. Successful completion of this work will lay the foundation necessary to introduce novel, intraoperative, photoacoustic image guidance techniques that will eliminate the incidence of accidental injury to major blood vessels and nerves during minimally invasive surgeries.